00:00:00.000 Started by upstream project "autotest-per-patch" build number 131951 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.059 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.059 The recommended git tool is: git 00:00:00.060 using credential 00000000-0000-0000-0000-000000000002 00:00:00.061 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.088 Fetching changes from the remote Git repository 00:00:00.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.132 Using shallow fetch with depth 1 00:00:00.132 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.132 > git --version # timeout=10 00:00:00.171 > git --version # 'git version 2.39.2' 00:00:00.171 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.202 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.202 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.253 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.266 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.281 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:02.281 > git config core.sparsecheckout # timeout=10 00:00:02.294 > git read-tree -mu HEAD # timeout=10 00:00:02.311 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:02.330 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:02.330 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:02.488 [Pipeline] Start of Pipeline 00:00:02.503 [Pipeline] library 00:00:02.504 Loading library shm_lib@master 00:00:02.504 Library shm_lib@master is cached. Copying from home. 00:00:02.519 [Pipeline] node 00:00:02.535 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.536 [Pipeline] { 00:00:02.547 [Pipeline] catchError 00:00:02.549 [Pipeline] { 00:00:02.558 [Pipeline] wrap 00:00:02.564 [Pipeline] { 00:00:02.570 [Pipeline] stage 00:00:02.571 [Pipeline] { (Prologue) 00:00:02.756 [Pipeline] sh 00:00:03.042 + logger -p user.info -t JENKINS-CI 00:00:03.060 [Pipeline] echo 00:00:03.062 Node: CYP9 00:00:03.068 [Pipeline] sh 00:00:03.371 [Pipeline] setCustomBuildProperty 00:00:03.381 [Pipeline] echo 00:00:03.382 Cleanup processes 00:00:03.388 [Pipeline] sh 00:00:03.677 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.677 694686 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.690 [Pipeline] sh 00:00:03.977 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.977 ++ grep -v 'sudo pgrep' 00:00:03.977 ++ awk '{print $1}' 00:00:03.977 + sudo kill -9 00:00:03.977 + true 00:00:03.989 [Pipeline] cleanWs 00:00:03.997 [WS-CLEANUP] Deleting project workspace... 00:00:03.997 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.004 [WS-CLEANUP] done 00:00:04.006 [Pipeline] setCustomBuildProperty 00:00:04.016 [Pipeline] sh 00:00:04.303 + sudo git config --global --replace-all safe.directory '*' 00:00:04.391 [Pipeline] httpRequest 00:00:04.873 [Pipeline] echo 00:00:04.874 Sorcerer 10.211.164.101 is alive 00:00:04.882 [Pipeline] retry 00:00:04.883 [Pipeline] { 00:00:04.895 [Pipeline] httpRequest 00:00:04.901 HttpMethod: GET 00:00:04.901 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:04.902 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:04.906 Response Code: HTTP/1.1 200 OK 00:00:04.906 Success: Status code 200 is in the accepted range: 200,404 00:00:04.907 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:05.189 [Pipeline] } 00:00:05.202 [Pipeline] // retry 00:00:05.209 [Pipeline] sh 00:00:05.492 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:05.509 [Pipeline] httpRequest 00:00:05.856 [Pipeline] echo 00:00:05.857 Sorcerer 10.211.164.101 is alive 00:00:05.866 [Pipeline] retry 00:00:05.868 [Pipeline] { 00:00:05.884 [Pipeline] httpRequest 00:00:05.888 HttpMethod: GET 00:00:05.889 URL: http://10.211.164.101/packages/spdk_1953a49150ed6e33360f8250ae9ac09888256f32.tar.gz 00:00:05.889 Sending request to url: http://10.211.164.101/packages/spdk_1953a49150ed6e33360f8250ae9ac09888256f32.tar.gz 00:00:05.901 Response Code: HTTP/1.1 200 OK 00:00:05.901 Success: Status code 200 is in the accepted range: 200,404 00:00:05.902 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1953a49150ed6e33360f8250ae9ac09888256f32.tar.gz 00:00:45.522 [Pipeline] } 00:00:45.546 [Pipeline] // retry 00:00:45.556 [Pipeline] sh 00:00:45.852 + tar --no-same-owner -xf spdk_1953a49150ed6e33360f8250ae9ac09888256f32.tar.gz 00:00:49.201 [Pipeline] sh 00:00:49.489 + git -C spdk log --oneline -n5 00:00:49.489 1953a4915 AE4DMA : Added AMD user space DMA driver 00:00:49.489 12fc2abf1 test: Remove autopackage.sh 00:00:49.489 83ba90867 fio/bdev: fix typo in README 00:00:49.489 45379ed84 module/compress: Cleanup vol data, when claim fails 00:00:49.489 0afe95a3a bdev/nvme: use bdev_nvme linker script 00:00:49.502 [Pipeline] } 00:00:49.517 [Pipeline] // stage 00:00:49.526 [Pipeline] stage 00:00:49.528 [Pipeline] { (Prepare) 00:00:49.545 [Pipeline] writeFile 00:00:49.560 [Pipeline] sh 00:00:49.856 + logger -p user.info -t JENKINS-CI 00:00:49.868 [Pipeline] sh 00:00:50.156 + logger -p user.info -t JENKINS-CI 00:00:50.169 [Pipeline] sh 00:00:50.458 + cat autorun-spdk.conf 00:00:50.458 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.459 SPDK_TEST_NVMF=1 00:00:50.459 SPDK_TEST_NVME_CLI=1 00:00:50.459 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.459 SPDK_TEST_NVMF_NICS=e810 00:00:50.459 SPDK_TEST_VFIOUSER=1 00:00:50.459 SPDK_RUN_UBSAN=1 00:00:50.459 NET_TYPE=phy 00:00:50.467 RUN_NIGHTLY=0 00:00:50.473 [Pipeline] readFile 00:00:50.504 [Pipeline] withEnv 00:00:50.507 [Pipeline] { 00:00:50.520 [Pipeline] sh 00:00:50.810 + set -ex 00:00:50.810 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:50.810 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:50.810 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.810 ++ SPDK_TEST_NVMF=1 00:00:50.810 ++ SPDK_TEST_NVME_CLI=1 00:00:50.810 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.810 ++ SPDK_TEST_NVMF_NICS=e810 00:00:50.810 ++ SPDK_TEST_VFIOUSER=1 
00:00:50.810 ++ SPDK_RUN_UBSAN=1 00:00:50.810 ++ NET_TYPE=phy 00:00:50.810 ++ RUN_NIGHTLY=0 00:00:50.810 + case $SPDK_TEST_NVMF_NICS in 00:00:50.810 + DRIVERS=ice 00:00:50.810 + [[ tcp == \r\d\m\a ]] 00:00:50.810 + [[ -n ice ]] 00:00:50.810 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:50.810 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:50.810 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:50.810 rmmod: ERROR: Module irdma is not currently loaded 00:00:50.810 rmmod: ERROR: Module i40iw is not currently loaded 00:00:50.810 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:50.810 + true 00:00:50.810 + for D in $DRIVERS 00:00:50.810 + sudo modprobe ice 00:00:50.810 + exit 0 00:00:50.821 [Pipeline] } 00:00:50.840 [Pipeline] // withEnv 00:00:50.845 [Pipeline] } 00:00:50.863 [Pipeline] // stage 00:00:50.872 [Pipeline] catchError 00:00:50.874 [Pipeline] { 00:00:50.887 [Pipeline] timeout 00:00:50.888 Timeout set to expire in 1 hr 0 min 00:00:50.889 [Pipeline] { 00:00:50.903 [Pipeline] stage 00:00:50.905 [Pipeline] { (Tests) 00:00:50.919 [Pipeline] sh 00:00:51.212 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.212 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.212 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.212 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:51.212 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:51.212 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:51.212 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:51.212 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:51.212 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:51.212 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:51.212 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:51.212 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.212 + source /etc/os-release 00:00:51.212 ++ NAME='Fedora Linux' 00:00:51.212 ++ VERSION='39 (Cloud Edition)' 00:00:51.212 ++ ID=fedora 00:00:51.212 ++ VERSION_ID=39 00:00:51.212 ++ VERSION_CODENAME= 00:00:51.212 ++ PLATFORM_ID=platform:f39 00:00:51.212 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:51.212 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:51.212 ++ LOGO=fedora-logo-icon 00:00:51.212 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:51.212 ++ HOME_URL=https://fedoraproject.org/ 00:00:51.212 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:51.212 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:51.212 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:51.212 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:51.212 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:51.212 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:51.212 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:51.212 ++ SUPPORT_END=2024-11-12 00:00:51.212 ++ VARIANT='Cloud Edition' 00:00:51.212 ++ VARIANT_ID=cloud 00:00:51.212 + uname -a 00:00:51.212 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:51.212 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:54.516 Hugepages 00:00:54.516 node hugesize free / total 00:00:54.516 node0 1048576kB 0 / 0 00:00:54.516 node0 2048kB 0 / 0 00:00:54.516 node1 1048576kB 0 / 0 00:00:54.516 node1 2048kB 0 / 0 00:00:54.516 00:00:54.516 Type BDF Vendor Device NUMA Driver Device Block 
devices 00:00:54.516 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:54.516 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:54.516 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:54.516 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:54.516 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:54.516 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:54.516 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:54.516 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:54.516 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:54.516 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:54.516 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:54.516 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:54.516 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:54.516 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:54.516 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:54.516 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:54.516 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:54.516 + rm -f /tmp/spdk-ld-path 00:00:54.516 + source autorun-spdk.conf 00:00:54.516 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:54.516 ++ SPDK_TEST_NVMF=1 00:00:54.516 ++ SPDK_TEST_NVME_CLI=1 00:00:54.516 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:54.516 ++ SPDK_TEST_NVMF_NICS=e810 00:00:54.516 ++ SPDK_TEST_VFIOUSER=1 00:00:54.516 ++ SPDK_RUN_UBSAN=1 00:00:54.516 ++ NET_TYPE=phy 00:00:54.516 ++ RUN_NIGHTLY=0 00:00:54.516 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:54.516 + [[ -n '' ]] 00:00:54.516 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:54.516 + for M in /var/spdk/build-*-manifest.txt 00:00:54.516 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:54.516 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:54.516 + for M in /var/spdk/build-*-manifest.txt 00:00:54.516 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:54.516 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:54.516 + for M in /var/spdk/build-*-manifest.txt 00:00:54.516 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:54.516 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:54.516 ++ uname 00:00:54.516 + [[ Linux == \L\i\n\u\x ]] 00:00:54.516 + sudo dmesg -T 00:00:54.516 + sudo dmesg --clear 00:00:54.516 + dmesg_pid=695662 00:00:54.516 + [[ Fedora Linux == FreeBSD ]] 00:00:54.516 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:54.516 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:54.516 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:54.516 + [[ -x /usr/src/fio-static/fio ]] 00:00:54.516 + export FIO_BIN=/usr/src/fio-static/fio 00:00:54.516 + FIO_BIN=/usr/src/fio-static/fio 00:00:54.516 + sudo dmesg -Tw 00:00:54.516 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:54.516 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:54.516 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:54.516 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:54.516 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:54.516 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:54.516 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:54.516 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:54.516 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:54.516 13:47:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:54.516 13:47:52 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:54.516 13:47:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:54.516 13:47:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:54.516 13:47:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:54.516 13:47:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:54.516 13:47:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:54.516 13:47:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:00:54.516 13:47:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:54.516 13:47:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:54.516 13:47:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:00:54.516 13:47:52 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:54.516 13:47:52 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:54.777 13:47:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:54.777 13:47:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:54.777 13:47:52 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:54.777 13:47:52 -- scripts/common.sh@547 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:54.777 13:47:52 -- scripts/common.sh@555 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:54.777 13:47:52 -- scripts/common.sh@556 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:54.777 13:47:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:54.777 13:47:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:54.777 13:47:52 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:54.777 13:47:52 -- paths/export.sh@5 -- $ export PATH 00:00:54.777 13:47:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:54.777 13:47:52 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:54.777 13:47:52 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:54.777 13:47:52 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730292472.XXXXXX 00:00:54.777 13:47:52 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730292472.Em53tS 00:00:54.777 13:47:52 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:00:54.777 13:47:52 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:54.777 13:47:52 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:54.777 13:47:52 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:54.777 13:47:52 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:54.777 13:47:52 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:54.777 13:47:52 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:54.777 13:47:52 -- common/autotest_common.sh@10 -- $ set +x 00:00:54.777 13:47:52 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:54.777 13:47:52 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:54.777 13:47:52 -- pm/common@17 -- $ local monitor 00:00:54.777 13:47:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:54.777 13:47:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:54.777 13:47:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:54.777 13:47:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:54.777 13:47:52 -- pm/common@21 -- $ date +%s 00:00:54.777 13:47:52 -- pm/common@25 -- $ sleep 1 00:00:54.777 13:47:52 -- pm/common@21 -- $ date +%s 00:00:54.777 13:47:52 -- pm/common@21 -- $ date +%s 00:00:54.777 13:47:52 -- pm/common@21 -- $ date +%s 00:00:54.777 13:47:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730292472 00:00:54.777 13:47:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730292472 00:00:54.777 13:47:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730292472 00:00:54.777 13:47:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730292472 00:00:54.777 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730292472_collect-cpu-load.pm.log 00:00:54.777 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730292472_collect-vmstat.pm.log 00:00:54.777 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730292472_collect-cpu-temp.pm.log 00:00:54.777 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730292472_collect-bmc-pm.bmc.pm.log 00:00:55.719 13:47:53 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:55.719 13:47:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:55.719 13:47:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:55.719 13:47:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:55.719 13:47:53 -- spdk/autobuild.sh@16 -- $ date -u 00:00:55.719 Wed Oct 30 12:47:53 PM UTC 2024 00:00:55.719 13:47:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:55.719 v25.01-pre-124-g1953a4915 00:00:55.719 13:47:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:55.719 13:47:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:55.719 13:47:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:55.719 13:47:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:55.719 13:47:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:55.719 13:47:53 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.719 ************************************ 00:00:55.719 START TEST ubsan 00:00:55.719 ************************************ 00:00:55.719 13:47:54 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:55.719 using ubsan 00:00:55.719 00:00:55.719 real 0m0.001s 00:00:55.719 user 0m0.000s 00:00:55.719 sys 0m0.000s 00:00:55.719 13:47:54 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:55.719 13:47:54 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:55.719 ************************************ 00:00:55.719 END TEST ubsan 00:00:55.719 ************************************ 00:00:55.979 13:47:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:55.979 13:47:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:55.979 13:47:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:55.979 13:47:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:55.979 13:47:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:55.979 13:47:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:55.979 13:47:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:55.979 13:47:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:55.979 
13:47:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:55.979 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:55.979 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:56.549 Using 'verbs' RDMA provider 00:01:12.390 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:24.714 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:25.287 Creating mk/config.mk...done. 00:01:25.287 Creating mk/cc.flags.mk...done. 00:01:25.287 Type 'make' to build. 00:01:25.287 13:48:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:25.287 13:48:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:25.287 13:48:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:25.287 13:48:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.287 ************************************ 00:01:25.287 START TEST make 00:01:25.287 ************************************ 00:01:25.287 13:48:23 make -- common/autotest_common.sh@1129 -- $ make -j144 00:01:25.547 make[1]: Nothing to be done for 'all'. 00:01:26.935 The Meson build system 00:01:26.935 Version: 1.5.0 00:01:26.935 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:26.935 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:26.935 Build type: native build 00:01:26.935 Project name: libvfio-user 00:01:26.935 Project version: 0.0.1 00:01:26.935 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:26.935 C linker for the host machine: cc ld.bfd 2.40-14 00:01:26.935 Host machine cpu family: x86_64 00:01:26.935 Host machine cpu: x86_64 00:01:26.935 Run-time dependency threads found: YES 00:01:26.935 Library dl found: YES 00:01:26.935 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:26.935 Run-time dependency json-c found: YES 0.17 00:01:26.935 Run-time dependency cmocka found: YES 1.1.7 00:01:26.935 Program pytest-3 found: NO 00:01:26.935 Program flake8 found: NO 00:01:26.935 Program misspell-fixer found: NO 00:01:26.935 Program restructuredtext-lint found: NO 00:01:26.935 Program valgrind found: YES (/usr/bin/valgrind) 00:01:26.935 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:26.935 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:26.935 Compiler for C supports arguments -Wwrite-strings: YES 00:01:26.935 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:26.935 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:26.935 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:26.935 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:26.935 Build targets in project: 8 00:01:26.935 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:26.935 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:26.935 00:01:26.935 libvfio-user 0.0.1 00:01:26.935 00:01:26.935 User defined options 00:01:26.935 buildtype : debug 00:01:26.935 default_library: shared 00:01:26.935 libdir : /usr/local/lib 00:01:26.935 00:01:26.935 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:27.504 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:27.504 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:27.504 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:27.504 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:27.504 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:27.504 [5/37] Compiling C object samples/null.p/null.c.o 00:01:27.504 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:27.504 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:27.504 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:27.504 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:27.765 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:27.765 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:27.765 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:27.765 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:27.765 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:27.765 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:27.765 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:27.765 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:27.765 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:27.765 [19/37] Compiling C object samples/server.p/server.c.o 00:01:27.765 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:27.765 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:27.765 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:27.765 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:27.765 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:27.765 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:27.765 [26/37] Compiling C object samples/client.p/client.c.o 00:01:27.765 [27/37] Linking target samples/client 00:01:27.765 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:27.765 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:27.765 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:27.765 [31/37] Linking target test/unit_tests 00:01:28.026 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:28.026 [33/37] Linking target samples/server 00:01:28.026 [34/37] Linking target samples/null 00:01:28.026 [35/37] Linking target samples/lspci 00:01:28.026 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:28.026 [37/37] Linking target samples/gpio-pci-idio-16 00:01:28.026 INFO: autodetecting backend as ninja 00:01:28.026 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:28.026 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:28.599 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:28.599 ninja: no work to do. 00:01:33.894 The Meson build system 00:01:33.894 Version: 1.5.0 00:01:33.894 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:33.894 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:33.894 Build type: native build 00:01:33.894 Program cat found: YES (/usr/bin/cat) 00:01:33.894 Project name: DPDK 00:01:33.894 Project version: 24.03.0 00:01:33.894 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:33.894 C linker for the host machine: cc ld.bfd 2.40-14 00:01:33.894 Host machine cpu family: x86_64 00:01:33.894 Host machine cpu: x86_64 00:01:33.894 Message: ## Building in Developer Mode ## 00:01:33.894 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:33.894 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:33.894 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:33.894 Program python3 found: YES (/usr/bin/python3) 00:01:33.894 Program cat found: YES (/usr/bin/cat) 00:01:33.894 Compiler for C supports arguments -march=native: YES 00:01:33.894 Checking for size of "void *" : 8 00:01:33.894 Checking for size of "void *" : 8 (cached) 00:01:33.894 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:33.894 Library m found: YES 00:01:33.894 Library numa found: YES 00:01:33.894 Has header "numaif.h" : YES 00:01:33.894 Library fdt found: NO 00:01:33.894 Library execinfo found: NO 00:01:33.894 Has header "execinfo.h" : YES 00:01:33.894 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:33.894 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:33.894 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:33.894 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:33.894 Run-time dependency openssl found: YES 3.1.1 00:01:33.894 Run-time dependency libpcap found: YES 1.10.4 00:01:33.894 Has header "pcap.h" with dependency libpcap: YES 00:01:33.894 Compiler for C supports arguments -Wcast-qual: YES 00:01:33.894 Compiler for C supports arguments -Wdeprecated: YES 00:01:33.894 Compiler for C supports arguments -Wformat: YES 00:01:33.894 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:33.894 Compiler for C supports arguments -Wformat-security: NO 00:01:33.894 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.894 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:33.894 Compiler for C supports arguments -Wnested-externs: YES 00:01:33.894 Compiler for C supports arguments -Wold-style-definition: YES 00:01:33.894 Compiler for C supports arguments -Wpointer-arith: YES 00:01:33.894 Compiler for C supports arguments -Wsign-compare: YES 00:01:33.894 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:33.894 Compiler for C supports arguments -Wundef: YES 00:01:33.894 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.894 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:33.894 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:33.894 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.894 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:33.894 Program objdump found: YES (/usr/bin/objdump) 00:01:33.894 Compiler for C supports arguments -mavx512f: YES 00:01:33.894 Checking if "AVX512 checking" compiles: YES 00:01:33.894 Fetching value of define "__SSE4_2__" : 1 00:01:33.894 Fetching value of define "__AES__" : 1 00:01:33.894 Fetching value of define "__AVX__" : 1 00:01:33.894 Fetching value of define "__AVX2__" : 1 00:01:33.894 Fetching value of define "__AVX512BW__" : 1 00:01:33.894 Fetching value of define "__AVX512CD__" : 1 00:01:33.894 Fetching value of define "__AVX512DQ__" : 1 00:01:33.894 Fetching value of define "__AVX512F__" : 1 00:01:33.894 Fetching value of define "__AVX512VL__" : 1 00:01:33.894 Fetching value of define "__PCLMUL__" : 1 00:01:33.894 Fetching value of define "__RDRND__" : 1 00:01:33.894 Fetching value of define "__RDSEED__" : 1 00:01:33.894 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:33.894 Fetching value of define "__znver1__" : (undefined) 00:01:33.894 Fetching value of define "__znver2__" : (undefined) 00:01:33.894 Fetching value of define "__znver3__" : (undefined) 00:01:33.894 Fetching value of define "__znver4__" : (undefined) 00:01:33.894 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:33.894 Message: lib/log: Defining dependency "log" 00:01:33.894 Message: lib/kvargs: Defining dependency "kvargs" 00:01:33.894 Message: lib/telemetry: Defining dependency "telemetry" 00:01:33.894 Checking for function "getentropy" : NO 00:01:33.894 Message: lib/eal: Defining dependency "eal" 00:01:33.894 Message: lib/ring: Defining dependency "ring" 00:01:33.894 Message: lib/rcu: Defining dependency "rcu" 00:01:33.894 Message: lib/mempool: Defining dependency "mempool" 00:01:33.894 Message: lib/mbuf: Defining dependency "mbuf" 00:01:33.894 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:33.894 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:33.894 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:33.894 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:33.894 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:33.894 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:33.894 Compiler for C supports arguments -mpclmul: YES 00:01:33.894 Compiler for C supports arguments -maes: YES 00:01:33.894 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.894 Compiler for C supports arguments -mavx512bw: YES 00:01:33.894 Compiler for C supports arguments -mavx512dq: YES 00:01:33.894 Compiler for C supports arguments -mavx512vl: YES 00:01:33.894 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:33.894 Compiler for C supports arguments -mavx2: YES 00:01:33.894 Compiler for C supports arguments -mavx: YES 00:01:33.894 Message: lib/net: Defining dependency "net" 00:01:33.894 Message: lib/meter: Defining dependency "meter" 00:01:33.894 Message: lib/ethdev: Defining dependency "ethdev" 00:01:33.894 Message: lib/pci: Defining dependency "pci" 00:01:33.894 Message: lib/cmdline: Defining dependency "cmdline" 00:01:33.894 Message: lib/hash: Defining dependency "hash" 00:01:33.894 Message: lib/timer: Defining dependency "timer" 00:01:33.894 Message: lib/compressdev: Defining dependency "compressdev" 00:01:33.894 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:33.894 Message: lib/dmadev: Defining dependency "dmadev" 
00:01:33.894 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:33.894 Message: lib/power: Defining dependency "power" 00:01:33.894 Message: lib/reorder: Defining dependency "reorder" 00:01:33.894 Message: lib/security: Defining dependency "security" 00:01:33.894 Has header "linux/userfaultfd.h" : YES 00:01:33.894 Has header "linux/vduse.h" : YES 00:01:33.894 Message: lib/vhost: Defining dependency "vhost" 00:01:33.894 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:33.894 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:33.894 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:33.894 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:33.894 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:33.894 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:33.894 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:33.894 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:33.894 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:33.894 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:33.894 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:33.894 Configuring doxy-api-html.conf using configuration 00:01:33.894 Configuring doxy-api-man.conf using configuration 00:01:33.894 Program mandb found: YES (/usr/bin/mandb) 00:01:33.894 Program sphinx-build found: NO 00:01:33.894 Configuring rte_build_config.h using configuration 00:01:33.894 Message: 00:01:33.894 ================= 00:01:33.894 Applications Enabled 00:01:33.894 ================= 00:01:33.894 00:01:33.894 apps: 00:01:33.894 00:01:33.894 00:01:33.894 Message: 00:01:33.894 ================= 00:01:33.894 Libraries Enabled 00:01:33.894 ================= 00:01:33.894 00:01:33.894 libs: 00:01:33.894 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:33.894 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:33.894 cryptodev, dmadev, power, reorder, security, vhost, 00:01:33.894 00:01:33.894 Message: 00:01:33.894 =============== 00:01:33.894 Drivers Enabled 00:01:33.894 =============== 00:01:33.894 00:01:33.894 common: 00:01:33.894 00:01:33.894 bus: 00:01:33.894 pci, vdev, 00:01:33.894 mempool: 00:01:33.894 ring, 00:01:33.894 dma: 00:01:33.894 00:01:33.894 net: 00:01:33.894 00:01:33.894 crypto: 00:01:33.894 00:01:33.894 compress: 00:01:33.894 00:01:33.894 vdpa: 00:01:33.894 00:01:33.894 00:01:33.894 Message: 00:01:33.894 ================= 00:01:33.894 Content Skipped 00:01:33.894 ================= 00:01:33.894 00:01:33.894 apps: 00:01:33.894 dumpcap: explicitly disabled via build config 00:01:33.894 graph: explicitly disabled via build config 00:01:33.894 pdump: explicitly disabled via build config 00:01:33.894 proc-info: explicitly disabled via build config 00:01:33.894 test-acl: explicitly disabled via build config 00:01:33.894 test-bbdev: explicitly disabled via build config 00:01:33.894 test-cmdline: explicitly disabled via build config 00:01:33.894 test-compress-perf: explicitly disabled via build config 00:01:33.894 test-crypto-perf: explicitly disabled via build config 00:01:33.894 test-dma-perf: explicitly disabled via build config 00:01:33.894 test-eventdev: explicitly disabled via build config 00:01:33.894 test-fib: explicitly disabled via build config 00:01:33.894 test-flow-perf: explicitly disabled via build config 00:01:33.894 test-gpudev: explicitly disabled 
via build config 00:01:33.894 test-mldev: explicitly disabled via build config 00:01:33.894 test-pipeline: explicitly disabled via build config 00:01:33.894 test-pmd: explicitly disabled via build config 00:01:33.894 test-regex: explicitly disabled via build config 00:01:33.894 test-sad: explicitly disabled via build config 00:01:33.894 test-security-perf: explicitly disabled via build config 00:01:33.894 00:01:33.895 libs: 00:01:33.895 argparse: explicitly disabled via build config 00:01:33.895 metrics: explicitly disabled via build config 00:01:33.895 acl: explicitly disabled via build config 00:01:33.895 bbdev: explicitly disabled via build config 00:01:33.895 bitratestats: explicitly disabled via build config 00:01:33.895 bpf: explicitly disabled via build config 00:01:33.895 cfgfile: explicitly disabled via build config 00:01:33.895 distributor: explicitly disabled via build config 00:01:33.895 efd: explicitly disabled via build config 00:01:33.895 eventdev: explicitly disabled via build config 00:01:33.895 dispatcher: explicitly disabled via build config 00:01:33.895 gpudev: explicitly disabled via build config 00:01:33.895 gro: explicitly disabled via build config 00:01:33.895 gso: explicitly disabled via build config 00:01:33.895 ip_frag: explicitly disabled via build config 00:01:33.895 jobstats: explicitly disabled via build config 00:01:33.895 latencystats: explicitly disabled via build config 00:01:33.895 lpm: explicitly disabled via build config 00:01:33.895 member: explicitly disabled via build config 00:01:33.895 pcapng: explicitly disabled via build config 00:01:33.895 rawdev: explicitly disabled via build config 00:01:33.895 regexdev: explicitly disabled via build config 00:01:33.895 mldev: explicitly disabled via build config 00:01:33.895 rib: explicitly disabled via build config 00:01:33.895 sched: explicitly disabled via build config 00:01:33.895 stack: explicitly disabled via build config 00:01:33.895 ipsec: explicitly disabled via build config 00:01:33.895 pdcp: explicitly disabled via build config 00:01:33.895 fib: explicitly disabled via build config 00:01:33.895 port: explicitly disabled via build config 00:01:33.895 pdump: explicitly disabled via build config 00:01:33.895 table: explicitly disabled via build config 00:01:33.895 pipeline: explicitly disabled via build config 00:01:33.895 graph: explicitly disabled via build config 00:01:33.895 node: explicitly disabled via build config 00:01:33.895 00:01:33.895 drivers: 00:01:33.895 common/cpt: not in enabled drivers build config 00:01:33.895 common/dpaax: not in enabled drivers build config 00:01:33.895 common/iavf: not in enabled drivers build config 00:01:33.895 common/idpf: not in enabled drivers build config 00:01:33.895 common/ionic: not in enabled drivers build config 00:01:33.895 common/mvep: not in enabled drivers build config 00:01:33.895 common/octeontx: not in enabled drivers build config 00:01:33.895 bus/auxiliary: not in enabled drivers build config 00:01:33.895 bus/cdx: not in enabled drivers build config 00:01:33.895 bus/dpaa: not in enabled drivers build config 00:01:33.895 bus/fslmc: not in enabled drivers build config 00:01:33.895 bus/ifpga: not in enabled drivers build config 00:01:33.895 bus/platform: not in enabled drivers build config 00:01:33.895 bus/uacce: not in enabled drivers build config 00:01:33.895 bus/vmbus: not in enabled drivers build config 00:01:33.895 common/cnxk: not in enabled drivers build config 00:01:33.895 common/mlx5: not in enabled drivers build config 00:01:33.895 
common/nfp: not in enabled drivers build config 00:01:33.895 common/nitrox: not in enabled drivers build config 00:01:33.895 common/qat: not in enabled drivers build config 00:01:33.895 common/sfc_efx: not in enabled drivers build config 00:01:33.895 mempool/bucket: not in enabled drivers build config 00:01:33.895 mempool/cnxk: not in enabled drivers build config 00:01:33.895 mempool/dpaa: not in enabled drivers build config 00:01:33.895 mempool/dpaa2: not in enabled drivers build config 00:01:33.895 mempool/octeontx: not in enabled drivers build config 00:01:33.895 mempool/stack: not in enabled drivers build config 00:01:33.895 dma/cnxk: not in enabled drivers build config 00:01:33.895 dma/dpaa: not in enabled drivers build config 00:01:33.895 dma/dpaa2: not in enabled drivers build config 00:01:33.895 dma/hisilicon: not in enabled drivers build config 00:01:33.895 dma/idxd: not in enabled drivers build config 00:01:33.895 dma/ioat: not in enabled drivers build config 00:01:33.895 dma/skeleton: not in enabled drivers build config 00:01:33.895 net/af_packet: not in enabled drivers build config 00:01:33.895 net/af_xdp: not in enabled drivers build config 00:01:33.895 net/ark: not in enabled drivers build config 00:01:33.895 net/atlantic: not in enabled drivers build config 00:01:33.895 net/avp: not in enabled drivers build config 00:01:33.895 net/axgbe: not in enabled drivers build config 00:01:33.895 net/bnx2x: not in enabled drivers build config 00:01:33.895 net/bnxt: not in enabled drivers build config 00:01:33.895 net/bonding: not in enabled drivers build config 00:01:33.895 net/cnxk: not in enabled drivers build config 00:01:33.895 net/cpfl: not in enabled drivers build config 00:01:33.895 net/cxgbe: not in enabled drivers build config 00:01:33.895 net/dpaa: not in enabled drivers build config 00:01:33.895 net/dpaa2: not in enabled drivers build config 00:01:33.895 net/e1000: not in enabled drivers build config 00:01:33.895 net/ena: not in enabled drivers build config 00:01:33.895 net/enetc: not in enabled drivers build config 00:01:33.895 net/enetfec: not in enabled drivers build config 00:01:33.895 net/enic: not in enabled drivers build config 00:01:33.895 net/failsafe: not in enabled drivers build config 00:01:33.895 net/fm10k: not in enabled drivers build config 00:01:33.895 net/gve: not in enabled drivers build config 00:01:33.895 net/hinic: not in enabled drivers build config 00:01:33.895 net/hns3: not in enabled drivers build config 00:01:33.895 net/i40e: not in enabled drivers build config 00:01:33.895 net/iavf: not in enabled drivers build config 00:01:33.895 net/ice: not in enabled drivers build config 00:01:33.895 net/idpf: not in enabled drivers build config 00:01:33.895 net/igc: not in enabled drivers build config 00:01:33.895 net/ionic: not in enabled drivers build config 00:01:33.895 net/ipn3ke: not in enabled drivers build config 00:01:33.895 net/ixgbe: not in enabled drivers build config 00:01:33.895 net/mana: not in enabled drivers build config 00:01:33.895 net/memif: not in enabled drivers build config 00:01:33.895 net/mlx4: not in enabled drivers build config 00:01:33.895 net/mlx5: not in enabled drivers build config 00:01:33.895 net/mvneta: not in enabled drivers build config 00:01:33.895 net/mvpp2: not in enabled drivers build config 00:01:33.895 net/netvsc: not in enabled drivers build config 00:01:33.895 net/nfb: not in enabled drivers build config 00:01:33.895 net/nfp: not in enabled drivers build config 00:01:33.895 net/ngbe: not in enabled drivers build 
config 00:01:33.895 net/null: not in enabled drivers build config 00:01:33.895 net/octeontx: not in enabled drivers build config 00:01:33.895 net/octeon_ep: not in enabled drivers build config 00:01:33.895 net/pcap: not in enabled drivers build config 00:01:33.895 net/pfe: not in enabled drivers build config 00:01:33.895 net/qede: not in enabled drivers build config 00:01:33.895 net/ring: not in enabled drivers build config 00:01:33.895 net/sfc: not in enabled drivers build config 00:01:33.895 net/softnic: not in enabled drivers build config 00:01:33.895 net/tap: not in enabled drivers build config 00:01:33.895 net/thunderx: not in enabled drivers build config 00:01:33.895 net/txgbe: not in enabled drivers build config 00:01:33.895 net/vdev_netvsc: not in enabled drivers build config 00:01:33.895 net/vhost: not in enabled drivers build config 00:01:33.895 net/virtio: not in enabled drivers build config 00:01:33.895 net/vmxnet3: not in enabled drivers build config 00:01:33.895 raw/*: missing internal dependency, "rawdev" 00:01:33.895 crypto/armv8: not in enabled drivers build config 00:01:33.895 crypto/bcmfs: not in enabled drivers build config 00:01:33.895 crypto/caam_jr: not in enabled drivers build config 00:01:33.895 crypto/ccp: not in enabled drivers build config 00:01:33.895 crypto/cnxk: not in enabled drivers build config 00:01:33.895 crypto/dpaa_sec: not in enabled drivers build config 00:01:33.895 crypto/dpaa2_sec: not in enabled drivers build config 00:01:33.895 crypto/ipsec_mb: not in enabled drivers build config 00:01:33.895 crypto/mlx5: not in enabled drivers build config 00:01:33.895 crypto/mvsam: not in enabled drivers build config 00:01:33.895 crypto/nitrox: not in enabled drivers build config 00:01:33.895 crypto/null: not in enabled drivers build config 00:01:33.895 crypto/octeontx: not in enabled drivers build config 00:01:33.895 crypto/openssl: not in enabled drivers build config 00:01:33.895 crypto/scheduler: not in enabled drivers build config 00:01:33.895 crypto/uadk: not in enabled drivers build config 00:01:33.895 crypto/virtio: not in enabled drivers build config 00:01:33.895 compress/isal: not in enabled drivers build config 00:01:33.895 compress/mlx5: not in enabled drivers build config 00:01:33.895 compress/nitrox: not in enabled drivers build config 00:01:33.895 compress/octeontx: not in enabled drivers build config 00:01:33.895 compress/zlib: not in enabled drivers build config 00:01:33.895 regex/*: missing internal dependency, "regexdev" 00:01:33.895 ml/*: missing internal dependency, "mldev" 00:01:33.895 vdpa/ifc: not in enabled drivers build config 00:01:33.895 vdpa/mlx5: not in enabled drivers build config 00:01:33.895 vdpa/nfp: not in enabled drivers build config 00:01:33.895 vdpa/sfc: not in enabled drivers build config 00:01:33.895 event/*: missing internal dependency, "eventdev" 00:01:33.895 baseband/*: missing internal dependency, "bbdev" 00:01:33.895 gpu/*: missing internal dependency, "gpudev" 00:01:33.895 00:01:33.895 00:01:34.468 Build targets in project: 84 00:01:34.468 00:01:34.468 DPDK 24.03.0 00:01:34.468 00:01:34.468 User defined options 00:01:34.468 buildtype : debug 00:01:34.468 default_library : shared 00:01:34.468 libdir : lib 00:01:34.468 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:34.468 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:34.468 c_link_args : 00:01:34.468 cpu_instruction_set: native 00:01:34.468 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:34.468 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:34.468 enable_docs : false 00:01:34.468 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:34.468 enable_kmods : false 00:01:34.468 max_lcores : 128 00:01:34.468 tests : false 00:01:34.468 00:01:34.468 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:34.734 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:35.004 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:35.004 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:35.004 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:35.004 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:35.004 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:35.004 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:35.004 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:35.004 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:35.004 [9/267] Linking static target lib/librte_kvargs.a 00:01:35.004 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:35.004 [11/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:35.004 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:35.004 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:35.004 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:35.004 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:35.004 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:35.004 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:35.004 [18/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:35.004 [19/267] Linking static target lib/librte_log.a 00:01:35.004 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:35.004 [21/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:35.004 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:35.004 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:35.004 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:35.004 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:35.004 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:35.004 [27/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:35.004 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:35.004 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.263 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:35.263 [31/267] Linking static target 
lib/librte_pci.a 00:01:35.263 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:35.263 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:35.263 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.263 [35/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:35.263 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:35.263 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:35.263 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:35.263 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:35.263 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.263 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.524 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:35.524 [43/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.524 [44/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:35.524 [45/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:35.524 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:35.524 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:35.524 [48/267] Linking static target lib/librte_meter.a 00:01:35.524 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:35.524 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.524 [51/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.524 [52/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:35.524 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:35.524 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:35.524 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.524 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:35.524 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:35.524 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:35.524 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:35.524 [60/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:35.524 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:35.524 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:35.524 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.524 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:35.524 [65/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:35.524 [66/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:35.524 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.524 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.524 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:35.524 [70/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:35.524 [71/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.524 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.524 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.524 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:35.524 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.524 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:35.524 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.524 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:35.524 [79/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:35.524 [80/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:35.524 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.524 [82/267] Linking static target lib/librte_telemetry.a 00:01:35.524 [83/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:35.524 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:35.524 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:35.524 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:35.524 [87/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:35.524 [88/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:35.524 [89/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:35.524 [90/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.524 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.524 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:35.524 [93/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.524 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:35.524 [95/267] Linking static target lib/librte_dmadev.a 00:01:35.524 [96/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:35.524 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.524 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.524 [99/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:35.524 [100/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:35.524 [101/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:35.524 [102/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:35.524 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:35.524 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.524 [105/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:35.524 [106/267] Linking static target lib/librte_timer.a 00:01:35.524 [107/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:35.524 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:35.524 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:35.524 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.524 [111/267] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.524 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:35.525 [113/267] Linking static target lib/librte_ring.a 00:01:35.525 [114/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:35.525 [115/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:35.525 [116/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:35.525 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.525 [118/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:35.525 [119/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:35.525 [120/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.525 [121/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:35.525 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:35.525 [123/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:35.525 [124/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:35.525 [125/267] Linking static target lib/librte_cmdline.a 00:01:35.525 [126/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:35.525 [127/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:35.525 [128/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:35.525 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:35.525 [130/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:35.525 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:35.525 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:35.525 [133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:35.525 [134/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:35.525 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:35.525 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:35.525 [137/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.525 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:35.525 [139/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.525 [140/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:35.525 [141/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:35.525 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:35.525 [143/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:35.525 [144/267] Linking static target lib/librte_net.a 00:01:35.525 [145/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:35.525 [146/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.525 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:35.525 [148/267] Linking static target lib/librte_compressdev.a 00:01:35.787 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:35.787 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.787 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 
00:01:35.787 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:35.787 [153/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:35.787 [154/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:35.787 [155/267] Linking target lib/librte_log.so.24.1 00:01:35.787 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:35.787 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.787 [158/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:35.787 [159/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.787 [160/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:35.787 [161/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:35.787 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:35.787 [163/267] Linking static target lib/librte_rcu.a 00:01:35.787 [164/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:35.787 [165/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:35.787 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:35.787 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:35.787 [168/267] Linking static target lib/librte_security.a 00:01:35.787 [169/267] Linking static target lib/librte_eal.a 00:01:35.787 [170/267] Linking static target lib/librte_mempool.a 00:01:35.787 [171/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:35.787 [172/267] Linking static target lib/librte_reorder.a 00:01:35.787 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:35.787 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:35.787 [175/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:35.787 [176/267] Linking static target lib/librte_power.a 00:01:35.787 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:35.787 [178/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:35.787 [179/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.787 [180/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.787 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:35.787 [182/267] Linking static target drivers/librte_bus_vdev.a 00:01:35.787 [183/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:35.787 [184/267] Linking target lib/librte_kvargs.so.24.1 00:01:35.787 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:35.787 [186/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:35.787 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:35.787 [188/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:35.787 [189/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:35.787 [190/267] Linking static target lib/librte_hash.a 00:01:35.787 [191/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:36.048 [192/267] Linking static target lib/librte_mbuf.a 00:01:36.048 [193/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:36.048 [194/267] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.048 [195/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.048 [196/267] Linking static target drivers/librte_bus_pci.a 00:01:36.048 [197/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.048 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:36.048 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:36.048 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.048 [201/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:36.048 [202/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.048 [203/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.048 [204/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.048 [205/267] Linking static target drivers/librte_mempool_ring.a 00:01:36.048 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.048 [207/267] Linking static target lib/librte_cryptodev.a 00:01:36.048 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.048 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.048 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:36.048 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:36.309 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.309 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.309 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.309 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:36.309 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.570 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.570 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.570 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:36.570 [220/267] Linking static target lib/librte_ethdev.a 00:01:36.829 [221/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.829 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.829 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.829 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.829 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.090 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.660 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:37.660 [228/267] Linking static target lib/librte_vhost.a 00:01:38.230 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:39.620 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.210 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.591 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.591 [233/267] Linking target lib/librte_eal.so.24.1 00:01:47.591 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:47.591 [235/267] Linking target lib/librte_meter.so.24.1 00:01:47.591 [236/267] Linking target lib/librte_ring.so.24.1 00:01:47.591 [237/267] Linking target lib/librte_dmadev.so.24.1 00:01:47.591 [238/267] Linking target lib/librte_pci.so.24.1 00:01:47.591 [239/267] Linking target lib/librte_timer.so.24.1 00:01:47.591 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:47.591 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:47.591 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:47.591 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:47.591 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:47.852 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:47.852 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:47.852 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:47.852 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:47.852 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:47.852 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:47.852 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:47.852 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:48.113 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:48.113 [254/267] Linking target lib/librte_net.so.24.1 00:01:48.113 [255/267] Linking target lib/librte_cryptodev.so.24.1 00:01:48.113 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:48.113 [257/267] Linking target lib/librte_compressdev.so.24.1 00:01:48.113 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:48.374 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:48.374 [260/267] Linking target lib/librte_security.so.24.1 00:01:48.374 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:48.374 [262/267] Linking target lib/librte_hash.so.24.1 00:01:48.374 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:48.374 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:48.374 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:48.636 [266/267] Linking target lib/librte_power.so.24.1 00:01:48.636 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:48.636 INFO: autodetecting backend as ninja 00:01:48.636 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:52.842 CC lib/ut_mock/mock.o 00:01:52.842 CC lib/log/log.o 00:01:52.842 CC lib/log/log_flags.o 00:01:52.842 CC lib/log/log_deprecated.o 00:01:52.842 CC lib/ut/ut.o 00:01:52.842 LIB libspdk_ut_mock.a 00:01:52.842 LIB libspdk_ut.a 00:01:52.842 LIB 
libspdk_log.a 00:01:52.842 SO libspdk_ut_mock.so.6.0 00:01:52.842 SO libspdk_ut.so.2.0 00:01:52.842 SO libspdk_log.so.7.1 00:01:52.842 SYMLINK libspdk_ut_mock.so 00:01:52.842 SYMLINK libspdk_ut.so 00:01:52.842 SYMLINK libspdk_log.so 00:01:52.842 CC lib/dma/dma.o 00:01:52.842 CC lib/util/base64.o 00:01:52.842 CC lib/util/bit_array.o 00:01:52.842 CC lib/util/cpuset.o 00:01:52.842 CC lib/ae4dma/ae4dma.o 00:01:52.842 CC lib/util/crc16.o 00:01:52.842 CC lib/util/crc32.o 00:01:52.842 CC lib/util/crc32c.o 00:01:52.842 CC lib/ioat/ioat.o 00:01:52.842 CC lib/util/crc32_ieee.o 00:01:52.842 CXX lib/trace_parser/trace.o 00:01:52.842 CC lib/util/crc64.o 00:01:52.842 CC lib/util/dif.o 00:01:52.842 CC lib/util/fd.o 00:01:52.842 CC lib/util/fd_group.o 00:01:52.842 CC lib/util/file.o 00:01:52.842 CC lib/util/hexlify.o 00:01:52.842 CC lib/util/iov.o 00:01:52.842 CC lib/util/math.o 00:01:52.842 CC lib/util/net.o 00:01:52.842 CC lib/util/pipe.o 00:01:52.842 CC lib/util/strerror_tls.o 00:01:52.842 CC lib/util/string.o 00:01:52.842 CC lib/util/uuid.o 00:01:52.842 CC lib/util/xor.o 00:01:52.842 CC lib/util/zipf.o 00:01:52.842 CC lib/util/md5.o 00:01:53.103 CC lib/vfio_user/host/vfio_user_pci.o 00:01:53.103 CC lib/vfio_user/host/vfio_user.o 00:01:53.103 LIB libspdk_dma.a 00:01:53.103 LIB libspdk_ae4dma.a 00:01:53.103 SO libspdk_dma.so.5.0 00:01:53.103 SO libspdk_ae4dma.so.1.0 00:01:53.103 SYMLINK libspdk_dma.so 00:01:53.103 LIB libspdk_ioat.a 00:01:53.103 SO libspdk_ioat.so.7.0 00:01:53.103 SYMLINK libspdk_ae4dma.so 00:01:53.103 SYMLINK libspdk_ioat.so 00:01:53.364 LIB libspdk_vfio_user.a 00:01:53.364 SO libspdk_vfio_user.so.5.0 00:01:53.364 LIB libspdk_util.a 00:01:53.364 SYMLINK libspdk_vfio_user.so 00:01:53.364 SO libspdk_util.so.10.0 00:01:53.626 SYMLINK libspdk_util.so 00:01:53.626 LIB libspdk_trace_parser.a 00:01:53.626 SO libspdk_trace_parser.so.6.0 00:01:53.888 SYMLINK libspdk_trace_parser.so 00:01:53.888 CC lib/vmd/vmd.o 00:01:53.888 CC lib/idxd/idxd.o 00:01:53.888 CC lib/json/json_parse.o 00:01:53.888 CC lib/rdma_utils/rdma_utils.o 00:01:53.888 CC lib/rdma_provider/common.o 00:01:53.888 CC lib/vmd/led.o 00:01:53.888 CC lib/conf/conf.o 00:01:53.888 CC lib/idxd/idxd_user.o 00:01:53.888 CC lib/json/json_util.o 00:01:53.888 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:53.888 CC lib/idxd/idxd_kernel.o 00:01:53.888 CC lib/json/json_write.o 00:01:53.888 CC lib/env_dpdk/env.o 00:01:53.888 CC lib/env_dpdk/memory.o 00:01:53.888 CC lib/env_dpdk/pci.o 00:01:53.888 CC lib/env_dpdk/init.o 00:01:53.888 CC lib/env_dpdk/threads.o 00:01:53.888 CC lib/env_dpdk/pci_ioat.o 00:01:53.888 CC lib/env_dpdk/pci_virtio.o 00:01:53.888 CC lib/env_dpdk/pci_vmd.o 00:01:53.888 CC lib/env_dpdk/pci_idxd.o 00:01:53.888 CC lib/env_dpdk/pci_ae4dma.o 00:01:53.888 CC lib/env_dpdk/pci_event.o 00:01:53.888 CC lib/env_dpdk/sigbus_handler.o 00:01:53.888 CC lib/env_dpdk/pci_dpdk.o 00:01:53.888 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:53.888 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:54.150 LIB libspdk_rdma_provider.a 00:01:54.150 LIB libspdk_conf.a 00:01:54.150 SO libspdk_rdma_provider.so.6.0 00:01:54.150 SO libspdk_conf.so.6.0 00:01:54.150 LIB libspdk_rdma_utils.a 00:01:54.150 LIB libspdk_json.a 00:01:54.411 SYMLINK libspdk_rdma_provider.so 00:01:54.411 SO libspdk_rdma_utils.so.1.0 00:01:54.411 SO libspdk_json.so.6.0 00:01:54.411 SYMLINK libspdk_conf.so 00:01:54.412 SYMLINK libspdk_rdma_utils.so 00:01:54.412 SYMLINK libspdk_json.so 00:01:54.412 LIB libspdk_idxd.a 00:01:54.673 SO libspdk_idxd.so.12.1 00:01:54.673 LIB libspdk_vmd.a 00:01:54.673 SO 
libspdk_vmd.so.6.0 00:01:54.673 SYMLINK libspdk_idxd.so 00:01:54.673 SYMLINK libspdk_vmd.so 00:01:54.673 CC lib/jsonrpc/jsonrpc_server.o 00:01:54.673 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:54.673 CC lib/jsonrpc/jsonrpc_client.o 00:01:54.673 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:54.934 LIB libspdk_jsonrpc.a 00:01:54.934 SO libspdk_jsonrpc.so.6.0 00:01:55.195 SYMLINK libspdk_jsonrpc.so 00:01:55.195 LIB libspdk_env_dpdk.a 00:01:55.195 SO libspdk_env_dpdk.so.15.1 00:01:55.457 SYMLINK libspdk_env_dpdk.so 00:01:55.457 CC lib/rpc/rpc.o 00:01:55.718 LIB libspdk_rpc.a 00:01:55.718 SO libspdk_rpc.so.6.0 00:01:55.718 SYMLINK libspdk_rpc.so 00:01:56.290 CC lib/trace/trace.o 00:01:56.290 CC lib/notify/notify.o 00:01:56.290 CC lib/trace/trace_flags.o 00:01:56.290 CC lib/keyring/keyring.o 00:01:56.290 CC lib/notify/notify_rpc.o 00:01:56.290 CC lib/trace/trace_rpc.o 00:01:56.290 CC lib/keyring/keyring_rpc.o 00:01:56.290 LIB libspdk_notify.a 00:01:56.290 SO libspdk_notify.so.6.0 00:01:56.550 LIB libspdk_keyring.a 00:01:56.551 LIB libspdk_trace.a 00:01:56.551 SO libspdk_keyring.so.2.0 00:01:56.551 SYMLINK libspdk_notify.so 00:01:56.551 SO libspdk_trace.so.11.0 00:01:56.551 SYMLINK libspdk_keyring.so 00:01:56.551 SYMLINK libspdk_trace.so 00:01:56.813 CC lib/sock/sock.o 00:01:56.813 CC lib/sock/sock_rpc.o 00:01:56.813 CC lib/thread/thread.o 00:01:56.813 CC lib/thread/iobuf.o 00:01:57.385 LIB libspdk_sock.a 00:01:57.385 SO libspdk_sock.so.10.0 00:01:57.385 SYMLINK libspdk_sock.so 00:01:57.958 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:57.958 CC lib/nvme/nvme_ctrlr.o 00:01:57.958 CC lib/nvme/nvme_fabric.o 00:01:57.958 CC lib/nvme/nvme_ns_cmd.o 00:01:57.958 CC lib/nvme/nvme_ns.o 00:01:57.958 CC lib/nvme/nvme_pcie_common.o 00:01:57.958 CC lib/nvme/nvme_pcie.o 00:01:57.958 CC lib/nvme/nvme_qpair.o 00:01:57.958 CC lib/nvme/nvme.o 00:01:57.958 CC lib/nvme/nvme_quirks.o 00:01:57.958 CC lib/nvme/nvme_transport.o 00:01:57.958 CC lib/nvme/nvme_discovery.o 00:01:57.958 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:57.958 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:57.958 CC lib/nvme/nvme_tcp.o 00:01:57.958 CC lib/nvme/nvme_opal.o 00:01:57.958 CC lib/nvme/nvme_io_msg.o 00:01:57.958 CC lib/nvme/nvme_poll_group.o 00:01:57.958 CC lib/nvme/nvme_zns.o 00:01:57.958 CC lib/nvme/nvme_stubs.o 00:01:57.958 CC lib/nvme/nvme_auth.o 00:01:57.958 CC lib/nvme/nvme_cuse.o 00:01:57.958 CC lib/nvme/nvme_vfio_user.o 00:01:57.958 CC lib/nvme/nvme_rdma.o 00:01:58.220 LIB libspdk_thread.a 00:01:58.220 SO libspdk_thread.so.11.0 00:01:58.481 SYMLINK libspdk_thread.so 00:01:58.744 CC lib/accel/accel.o 00:01:58.744 CC lib/accel/accel_rpc.o 00:01:58.744 CC lib/accel/accel_sw.o 00:01:58.744 CC lib/init/json_config.o 00:01:58.744 CC lib/init/subsystem.o 00:01:58.744 CC lib/init/subsystem_rpc.o 00:01:58.744 CC lib/fsdev/fsdev.o 00:01:58.744 CC lib/init/rpc.o 00:01:58.744 CC lib/vfu_tgt/tgt_endpoint.o 00:01:58.744 CC lib/fsdev/fsdev_io.o 00:01:58.744 CC lib/vfu_tgt/tgt_rpc.o 00:01:58.744 CC lib/fsdev/fsdev_rpc.o 00:01:58.744 CC lib/virtio/virtio.o 00:01:58.744 CC lib/virtio/virtio_vhost_user.o 00:01:58.744 CC lib/blob/blobstore.o 00:01:58.744 CC lib/blob/zeroes.o 00:01:58.744 CC lib/virtio/virtio_vfio_user.o 00:01:58.744 CC lib/blob/request.o 00:01:58.744 CC lib/virtio/virtio_pci.o 00:01:58.744 CC lib/blob/blob_bs_dev.o 00:01:59.005 LIB libspdk_init.a 00:01:59.005 SO libspdk_init.so.6.0 00:01:59.005 LIB libspdk_virtio.a 00:01:59.005 LIB libspdk_vfu_tgt.a 00:01:59.267 SYMLINK libspdk_init.so 00:01:59.267 SO libspdk_virtio.so.7.0 00:01:59.267 SO 
libspdk_vfu_tgt.so.3.0 00:01:59.267 SYMLINK libspdk_virtio.so 00:01:59.267 SYMLINK libspdk_vfu_tgt.so 00:01:59.267 LIB libspdk_fsdev.a 00:01:59.529 SO libspdk_fsdev.so.2.0 00:01:59.529 SYMLINK libspdk_fsdev.so 00:01:59.529 CC lib/event/app.o 00:01:59.529 CC lib/event/reactor.o 00:01:59.529 CC lib/event/log_rpc.o 00:01:59.529 CC lib/event/app_rpc.o 00:01:59.529 CC lib/event/scheduler_static.o 00:01:59.791 LIB libspdk_accel.a 00:01:59.791 LIB libspdk_nvme.a 00:01:59.791 SO libspdk_accel.so.16.0 00:01:59.791 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:59.791 SYMLINK libspdk_accel.so 00:01:59.791 SO libspdk_nvme.so.14.1 00:02:00.052 LIB libspdk_event.a 00:02:00.052 SO libspdk_event.so.14.0 00:02:00.052 SYMLINK libspdk_event.so 00:02:00.052 SYMLINK libspdk_nvme.so 00:02:00.313 CC lib/bdev/bdev.o 00:02:00.313 CC lib/bdev/bdev_rpc.o 00:02:00.313 CC lib/bdev/bdev_zone.o 00:02:00.313 CC lib/bdev/part.o 00:02:00.313 CC lib/bdev/scsi_nvme.o 00:02:00.313 LIB libspdk_fuse_dispatcher.a 00:02:00.574 SO libspdk_fuse_dispatcher.so.1.0 00:02:00.574 SYMLINK libspdk_fuse_dispatcher.so 00:02:01.518 LIB libspdk_blob.a 00:02:01.519 SO libspdk_blob.so.11.0 00:02:01.519 SYMLINK libspdk_blob.so 00:02:01.782 CC lib/lvol/lvol.o 00:02:01.782 CC lib/blobfs/blobfs.o 00:02:01.782 CC lib/blobfs/tree.o 00:02:02.728 LIB libspdk_bdev.a 00:02:02.728 SO libspdk_bdev.so.17.0 00:02:02.728 LIB libspdk_blobfs.a 00:02:02.728 SO libspdk_blobfs.so.10.0 00:02:02.728 SYMLINK libspdk_bdev.so 00:02:02.728 LIB libspdk_lvol.a 00:02:02.728 SYMLINK libspdk_blobfs.so 00:02:02.728 SO libspdk_lvol.so.10.0 00:02:02.991 SYMLINK libspdk_lvol.so 00:02:02.991 CC lib/nbd/nbd.o 00:02:02.991 CC lib/nbd/nbd_rpc.o 00:02:02.991 CC lib/ublk/ublk.o 00:02:02.991 CC lib/ublk/ublk_rpc.o 00:02:02.991 CC lib/nvmf/ctrlr.o 00:02:02.991 CC lib/nvmf/ctrlr_discovery.o 00:02:02.991 CC lib/scsi/dev.o 00:02:02.991 CC lib/nvmf/ctrlr_bdev.o 00:02:02.991 CC lib/scsi/lun.o 00:02:02.991 CC lib/nvmf/subsystem.o 00:02:02.991 CC lib/scsi/port.o 00:02:02.991 CC lib/scsi/scsi.o 00:02:02.991 CC lib/nvmf/nvmf.o 00:02:02.991 CC lib/scsi/scsi_bdev.o 00:02:02.991 CC lib/nvmf/nvmf_rpc.o 00:02:02.991 CC lib/scsi/scsi_pr.o 00:02:02.991 CC lib/nvmf/transport.o 00:02:02.991 CC lib/scsi/scsi_rpc.o 00:02:02.991 CC lib/nvmf/tcp.o 00:02:02.991 CC lib/nvmf/stubs.o 00:02:02.991 CC lib/scsi/task.o 00:02:02.991 CC lib/ftl/ftl_core.o 00:02:02.991 CC lib/nvmf/mdns_server.o 00:02:02.991 CC lib/ftl/ftl_init.o 00:02:02.991 CC lib/nvmf/vfio_user.o 00:02:02.991 CC lib/ftl/ftl_layout.o 00:02:02.991 CC lib/nvmf/rdma.o 00:02:02.991 CC lib/ftl/ftl_debug.o 00:02:02.991 CC lib/nvmf/auth.o 00:02:02.991 CC lib/ftl/ftl_io.o 00:02:02.991 CC lib/ftl/ftl_sb.o 00:02:02.991 CC lib/ftl/ftl_l2p.o 00:02:02.991 CC lib/ftl/ftl_l2p_flat.o 00:02:02.991 CC lib/ftl/ftl_nv_cache.o 00:02:02.991 CC lib/ftl/ftl_band.o 00:02:02.991 CC lib/ftl/ftl_band_ops.o 00:02:02.991 CC lib/ftl/ftl_writer.o 00:02:02.991 CC lib/ftl/ftl_rq.o 00:02:02.991 CC lib/ftl/ftl_reloc.o 00:02:02.991 CC lib/ftl/ftl_l2p_cache.o 00:02:02.991 CC lib/ftl/ftl_p2l.o 00:02:02.991 CC lib/ftl/ftl_p2l_log.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:03.250 
CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:03.250 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:03.250 CC lib/ftl/utils/ftl_conf.o 00:02:03.250 CC lib/ftl/utils/ftl_md.o 00:02:03.250 CC lib/ftl/utils/ftl_mempool.o 00:02:03.250 CC lib/ftl/utils/ftl_property.o 00:02:03.250 CC lib/ftl/utils/ftl_bitmap.o 00:02:03.250 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:03.250 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:03.250 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:03.250 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:03.250 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:03.250 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:03.250 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:03.250 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:03.250 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:03.250 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:03.250 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:03.250 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:03.250 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:03.250 CC lib/ftl/base/ftl_base_dev.o 00:02:03.250 CC lib/ftl/base/ftl_base_bdev.o 00:02:03.250 CC lib/ftl/ftl_trace.o 00:02:03.511 LIB libspdk_nbd.a 00:02:03.511 SO libspdk_nbd.so.7.0 00:02:03.773 LIB libspdk_scsi.a 00:02:03.773 SYMLINK libspdk_nbd.so 00:02:03.773 SO libspdk_scsi.so.9.0 00:02:03.773 LIB libspdk_ublk.a 00:02:03.773 SYMLINK libspdk_scsi.so 00:02:03.773 SO libspdk_ublk.so.3.0 00:02:04.036 SYMLINK libspdk_ublk.so 00:02:04.036 LIB libspdk_ftl.a 00:02:04.036 CC lib/vhost/vhost.o 00:02:04.036 CC lib/vhost/vhost_rpc.o 00:02:04.299 CC lib/vhost/vhost_scsi.o 00:02:04.299 CC lib/vhost/vhost_blk.o 00:02:04.299 CC lib/iscsi/conn.o 00:02:04.299 CC lib/vhost/rte_vhost_user.o 00:02:04.299 CC lib/iscsi/init_grp.o 00:02:04.299 CC lib/iscsi/iscsi.o 00:02:04.299 CC lib/iscsi/param.o 00:02:04.299 CC lib/iscsi/portal_grp.o 00:02:04.299 CC lib/iscsi/tgt_node.o 00:02:04.299 CC lib/iscsi/iscsi_subsystem.o 00:02:04.299 CC lib/iscsi/iscsi_rpc.o 00:02:04.299 CC lib/iscsi/task.o 00:02:04.299 SO libspdk_ftl.so.9.0 00:02:04.561 SYMLINK libspdk_ftl.so 00:02:05.133 LIB libspdk_nvmf.a 00:02:05.133 SO libspdk_nvmf.so.20.0 00:02:05.133 LIB libspdk_vhost.a 00:02:05.133 SO libspdk_vhost.so.8.0 00:02:05.394 SYMLINK libspdk_nvmf.so 00:02:05.394 SYMLINK libspdk_vhost.so 00:02:05.394 LIB libspdk_iscsi.a 00:02:05.394 SO libspdk_iscsi.so.8.0 00:02:05.655 SYMLINK libspdk_iscsi.so 00:02:06.228 CC module/env_dpdk/env_dpdk_rpc.o 00:02:06.228 CC module/vfu_device/vfu_virtio.o 00:02:06.228 CC module/vfu_device/vfu_virtio_blk.o 00:02:06.228 CC module/vfu_device/vfu_virtio_scsi.o 00:02:06.228 CC module/vfu_device/vfu_virtio_rpc.o 00:02:06.228 CC module/vfu_device/vfu_virtio_fs.o 00:02:06.228 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:06.228 CC module/sock/posix/posix.o 00:02:06.228 LIB libspdk_env_dpdk_rpc.a 00:02:06.228 CC module/blob/bdev/blob_bdev.o 00:02:06.228 CC module/scheduler/gscheduler/gscheduler.o 00:02:06.228 CC module/accel/dsa/accel_dsa.o 00:02:06.228 CC module/accel/error/accel_error.o 00:02:06.228 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:06.228 CC module/accel/dsa/accel_dsa_rpc.o 00:02:06.228 CC module/accel/error/accel_error_rpc.o 00:02:06.489 CC module/accel/iaa/accel_iaa.o 00:02:06.489 CC module/accel/iaa/accel_iaa_rpc.o 00:02:06.489 CC module/accel/ae4dma/accel_ae4dma.o 00:02:06.489 CC module/accel/ioat/accel_ioat.o 00:02:06.489 CC module/keyring/file/keyring.o 00:02:06.489 CC module/accel/ae4dma/accel_ae4dma_rpc.o 00:02:06.489 CC module/accel/ioat/accel_ioat_rpc.o 00:02:06.489 CC 
module/keyring/file/keyring_rpc.o 00:02:06.489 CC module/fsdev/aio/fsdev_aio.o 00:02:06.489 CC module/keyring/linux/keyring.o 00:02:06.489 CC module/keyring/linux/keyring_rpc.o 00:02:06.489 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:06.489 CC module/fsdev/aio/linux_aio_mgr.o 00:02:06.489 SO libspdk_env_dpdk_rpc.so.6.0 00:02:06.489 SYMLINK libspdk_env_dpdk_rpc.so 00:02:06.489 LIB libspdk_keyring_file.a 00:02:06.489 LIB libspdk_keyring_linux.a 00:02:06.489 LIB libspdk_scheduler_dpdk_governor.a 00:02:06.489 LIB libspdk_scheduler_gscheduler.a 00:02:06.489 SO libspdk_scheduler_gscheduler.so.4.0 00:02:06.489 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:06.489 LIB libspdk_accel_error.a 00:02:06.489 SO libspdk_keyring_file.so.2.0 00:02:06.489 SO libspdk_keyring_linux.so.1.0 00:02:06.489 LIB libspdk_accel_ioat.a 00:02:06.489 LIB libspdk_scheduler_dynamic.a 00:02:06.489 LIB libspdk_accel_ae4dma.a 00:02:06.750 LIB libspdk_accel_iaa.a 00:02:06.750 SO libspdk_accel_error.so.2.0 00:02:06.750 SO libspdk_accel_ioat.so.6.0 00:02:06.750 SO libspdk_scheduler_dynamic.so.4.0 00:02:06.750 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:06.750 SO libspdk_accel_ae4dma.so.1.0 00:02:06.750 SYMLINK libspdk_scheduler_gscheduler.so 00:02:06.750 LIB libspdk_blob_bdev.a 00:02:06.750 SO libspdk_accel_iaa.so.3.0 00:02:06.750 SYMLINK libspdk_keyring_file.so 00:02:06.750 LIB libspdk_accel_dsa.a 00:02:06.750 SYMLINK libspdk_keyring_linux.so 00:02:06.750 SYMLINK libspdk_accel_error.so 00:02:06.750 SYMLINK libspdk_accel_ioat.so 00:02:06.750 SO libspdk_blob_bdev.so.11.0 00:02:06.750 SO libspdk_accel_dsa.so.5.0 00:02:06.750 SYMLINK libspdk_scheduler_dynamic.so 00:02:06.750 SYMLINK libspdk_accel_ae4dma.so 00:02:06.750 SYMLINK libspdk_accel_iaa.so 00:02:06.750 LIB libspdk_vfu_device.a 00:02:06.750 SYMLINK libspdk_blob_bdev.so 00:02:06.750 SYMLINK libspdk_accel_dsa.so 00:02:06.750 SO libspdk_vfu_device.so.3.0 00:02:07.012 SYMLINK libspdk_vfu_device.so 00:02:07.012 LIB libspdk_fsdev_aio.a 00:02:07.012 LIB libspdk_sock_posix.a 00:02:07.012 SO libspdk_fsdev_aio.so.1.0 00:02:07.012 SO libspdk_sock_posix.so.6.0 00:02:07.272 SYMLINK libspdk_fsdev_aio.so 00:02:07.272 SYMLINK libspdk_sock_posix.so 00:02:07.272 CC module/bdev/nvme/bdev_nvme.o 00:02:07.272 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:07.272 CC module/bdev/nvme/nvme_rpc.o 00:02:07.272 CC module/bdev/nvme/vbdev_opal.o 00:02:07.272 CC module/bdev/nvme/bdev_mdns_client.o 00:02:07.272 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:07.272 CC module/bdev/gpt/gpt.o 00:02:07.272 CC module/bdev/lvol/vbdev_lvol.o 00:02:07.272 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:07.272 CC module/blobfs/bdev/blobfs_bdev.o 00:02:07.273 CC module/bdev/error/vbdev_error.o 00:02:07.273 CC module/bdev/aio/bdev_aio.o 00:02:07.273 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:07.273 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:07.273 CC module/bdev/gpt/vbdev_gpt.o 00:02:07.273 CC module/bdev/error/vbdev_error_rpc.o 00:02:07.273 CC module/bdev/aio/bdev_aio_rpc.o 00:02:07.273 CC module/bdev/null/bdev_null.o 00:02:07.273 CC module/bdev/null/bdev_null_rpc.o 00:02:07.273 CC module/bdev/delay/vbdev_delay.o 00:02:07.273 CC module/bdev/iscsi/bdev_iscsi.o 00:02:07.273 CC module/bdev/malloc/bdev_malloc.o 00:02:07.273 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:07.273 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:07.273 CC module/bdev/ftl/bdev_ftl.o 00:02:07.273 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:07.273 CC module/bdev/passthru/vbdev_passthru.o 00:02:07.273 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:07.273 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:02:07.273 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:07.273 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:07.273 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:07.273 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:07.273 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:07.273 CC module/bdev/raid/bdev_raid.o 00:02:07.273 CC module/bdev/raid/bdev_raid_rpc.o 00:02:07.273 CC module/bdev/raid/bdev_raid_sb.o 00:02:07.273 CC module/bdev/raid/raid0.o 00:02:07.273 CC module/bdev/raid/raid1.o 00:02:07.273 CC module/bdev/split/vbdev_split.o 00:02:07.273 CC module/bdev/split/vbdev_split_rpc.o 00:02:07.273 CC module/bdev/raid/concat.o 00:02:07.533 LIB libspdk_blobfs_bdev.a 00:02:07.533 SO libspdk_blobfs_bdev.so.6.0 00:02:07.795 LIB libspdk_bdev_split.a 00:02:07.795 LIB libspdk_bdev_error.a 00:02:07.795 LIB libspdk_bdev_gpt.a 00:02:07.795 SYMLINK libspdk_blobfs_bdev.so 00:02:07.795 LIB libspdk_bdev_null.a 00:02:07.795 SO libspdk_bdev_error.so.6.0 00:02:07.795 SO libspdk_bdev_split.so.6.0 00:02:07.795 SO libspdk_bdev_gpt.so.6.0 00:02:07.795 LIB libspdk_bdev_passthru.a 00:02:07.795 LIB libspdk_bdev_ftl.a 00:02:07.795 SO libspdk_bdev_null.so.6.0 00:02:07.795 LIB libspdk_bdev_aio.a 00:02:07.795 SO libspdk_bdev_passthru.so.6.0 00:02:07.795 SO libspdk_bdev_ftl.so.6.0 00:02:07.795 SYMLINK libspdk_bdev_error.so 00:02:07.795 LIB libspdk_bdev_zone_block.a 00:02:07.795 SYMLINK libspdk_bdev_split.so 00:02:07.795 SO libspdk_bdev_aio.so.6.0 00:02:07.795 SYMLINK libspdk_bdev_gpt.so 00:02:07.795 LIB libspdk_bdev_delay.a 00:02:07.795 SYMLINK libspdk_bdev_null.so 00:02:07.795 LIB libspdk_bdev_iscsi.a 00:02:07.795 LIB libspdk_bdev_malloc.a 00:02:07.795 SO libspdk_bdev_zone_block.so.6.0 00:02:07.795 SYMLINK libspdk_bdev_passthru.so 00:02:07.795 SYMLINK libspdk_bdev_ftl.so 00:02:07.795 SO libspdk_bdev_delay.so.6.0 00:02:07.795 SO libspdk_bdev_malloc.so.6.0 00:02:07.795 SO libspdk_bdev_iscsi.so.6.0 00:02:07.795 SYMLINK libspdk_bdev_aio.so 00:02:07.795 SYMLINK libspdk_bdev_zone_block.so 00:02:08.056 LIB libspdk_bdev_lvol.a 00:02:08.056 SYMLINK libspdk_bdev_delay.so 00:02:08.056 LIB libspdk_bdev_virtio.a 00:02:08.056 SYMLINK libspdk_bdev_malloc.so 00:02:08.056 SYMLINK libspdk_bdev_iscsi.so 00:02:08.056 SO libspdk_bdev_lvol.so.6.0 00:02:08.056 SO libspdk_bdev_virtio.so.6.0 00:02:08.056 SYMLINK libspdk_bdev_lvol.so 00:02:08.056 SYMLINK libspdk_bdev_virtio.so 00:02:08.317 LIB libspdk_bdev_raid.a 00:02:08.317 SO libspdk_bdev_raid.so.6.0 00:02:08.578 SYMLINK libspdk_bdev_raid.so 00:02:09.521 LIB libspdk_bdev_nvme.a 00:02:09.782 SO libspdk_bdev_nvme.so.7.1 00:02:09.782 SYMLINK libspdk_bdev_nvme.so 00:02:10.726 CC module/event/subsystems/iobuf/iobuf.o 00:02:10.726 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:10.726 CC module/event/subsystems/vmd/vmd.o 00:02:10.726 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:10.726 CC module/event/subsystems/keyring/keyring.o 00:02:10.726 CC module/event/subsystems/sock/sock.o 00:02:10.726 CC module/event/subsystems/scheduler/scheduler.o 00:02:10.726 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:10.726 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:10.726 CC module/event/subsystems/fsdev/fsdev.o 00:02:10.726 LIB libspdk_event_keyring.a 00:02:10.726 LIB libspdk_event_vmd.a 00:02:10.726 LIB libspdk_event_iobuf.a 00:02:10.726 LIB libspdk_event_vhost_blk.a 00:02:10.726 LIB libspdk_event_vfu_tgt.a 00:02:10.726 LIB libspdk_event_fsdev.a 00:02:10.726 LIB libspdk_event_scheduler.a 00:02:10.726 LIB libspdk_event_sock.a 00:02:10.726 
SO libspdk_event_keyring.so.1.0 00:02:10.726 SO libspdk_event_vhost_blk.so.3.0 00:02:10.726 SO libspdk_event_vmd.so.6.0 00:02:10.726 SO libspdk_event_iobuf.so.3.0 00:02:10.726 SO libspdk_event_vfu_tgt.so.3.0 00:02:10.726 SO libspdk_event_scheduler.so.4.0 00:02:10.726 SO libspdk_event_fsdev.so.1.0 00:02:10.726 SO libspdk_event_sock.so.5.0 00:02:10.726 SYMLINK libspdk_event_keyring.so 00:02:10.726 SYMLINK libspdk_event_vfu_tgt.so 00:02:10.726 SYMLINK libspdk_event_vhost_blk.so 00:02:10.726 SYMLINK libspdk_event_fsdev.so 00:02:10.726 SYMLINK libspdk_event_iobuf.so 00:02:10.726 SYMLINK libspdk_event_scheduler.so 00:02:10.726 SYMLINK libspdk_event_vmd.so 00:02:10.726 SYMLINK libspdk_event_sock.so 00:02:11.299 CC module/event/subsystems/accel/accel.o 00:02:11.299 LIB libspdk_event_accel.a 00:02:11.299 SO libspdk_event_accel.so.6.0 00:02:11.299 SYMLINK libspdk_event_accel.so 00:02:11.870 CC module/event/subsystems/bdev/bdev.o 00:02:11.870 LIB libspdk_event_bdev.a 00:02:11.870 SO libspdk_event_bdev.so.6.0 00:02:12.131 SYMLINK libspdk_event_bdev.so 00:02:12.393 CC module/event/subsystems/scsi/scsi.o 00:02:12.393 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:12.393 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:12.393 CC module/event/subsystems/nbd/nbd.o 00:02:12.393 CC module/event/subsystems/ublk/ublk.o 00:02:12.654 LIB libspdk_event_scsi.a 00:02:12.655 LIB libspdk_event_nbd.a 00:02:12.655 LIB libspdk_event_ublk.a 00:02:12.655 SO libspdk_event_scsi.so.6.0 00:02:12.655 SO libspdk_event_nbd.so.6.0 00:02:12.655 SO libspdk_event_ublk.so.3.0 00:02:12.655 SYMLINK libspdk_event_scsi.so 00:02:12.655 LIB libspdk_event_nvmf.a 00:02:12.655 SYMLINK libspdk_event_nbd.so 00:02:12.655 SYMLINK libspdk_event_ublk.so 00:02:12.655 SO libspdk_event_nvmf.so.6.0 00:02:12.915 SYMLINK libspdk_event_nvmf.so 00:02:12.915 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.915 CC module/event/subsystems/iscsi/iscsi.o 00:02:13.177 LIB libspdk_event_vhost_scsi.a 00:02:13.177 LIB libspdk_event_iscsi.a 00:02:13.177 SO libspdk_event_vhost_scsi.so.3.0 00:02:13.177 SO libspdk_event_iscsi.so.6.0 00:02:13.177 SYMLINK libspdk_event_vhost_scsi.so 00:02:13.437 SYMLINK libspdk_event_iscsi.so 00:02:13.437 SO libspdk.so.6.0 00:02:13.437 SYMLINK libspdk.so 00:02:14.011 CC app/spdk_nvme_identify/identify.o 00:02:14.011 CC app/spdk_top/spdk_top.o 00:02:14.011 CC app/trace_record/trace_record.o 00:02:14.011 CXX app/trace/trace.o 00:02:14.011 TEST_HEADER include/spdk/accel.h 00:02:14.011 CC test/rpc_client/rpc_client_test.o 00:02:14.011 TEST_HEADER include/spdk/accel_module.h 00:02:14.011 CC app/spdk_lspci/spdk_lspci.o 00:02:14.011 TEST_HEADER include/spdk/ae4dma.h 00:02:14.011 TEST_HEADER include/spdk/ae4dma_spec.h 00:02:14.011 TEST_HEADER include/spdk/assert.h 00:02:14.011 TEST_HEADER include/spdk/barrier.h 00:02:14.011 TEST_HEADER include/spdk/base64.h 00:02:14.011 CC app/spdk_nvme_discover/discovery_aer.o 00:02:14.011 TEST_HEADER include/spdk/bdev.h 00:02:14.011 TEST_HEADER include/spdk/bdev_module.h 00:02:14.011 TEST_HEADER include/spdk/bdev_zone.h 00:02:14.011 TEST_HEADER include/spdk/bit_array.h 00:02:14.011 TEST_HEADER include/spdk/bit_pool.h 00:02:14.011 CC app/spdk_nvme_perf/perf.o 00:02:14.011 TEST_HEADER include/spdk/blob_bdev.h 00:02:14.011 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:14.011 TEST_HEADER include/spdk/blob.h 00:02:14.011 TEST_HEADER include/spdk/blobfs.h 00:02:14.011 TEST_HEADER include/spdk/conf.h 00:02:14.011 TEST_HEADER include/spdk/cpuset.h 00:02:14.011 TEST_HEADER include/spdk/config.h 00:02:14.011 
TEST_HEADER include/spdk/crc16.h 00:02:14.011 TEST_HEADER include/spdk/crc32.h 00:02:14.011 TEST_HEADER include/spdk/dif.h 00:02:14.011 TEST_HEADER include/spdk/crc64.h 00:02:14.011 TEST_HEADER include/spdk/dma.h 00:02:14.011 TEST_HEADER include/spdk/endian.h 00:02:14.011 TEST_HEADER include/spdk/env.h 00:02:14.011 TEST_HEADER include/spdk/env_dpdk.h 00:02:14.011 TEST_HEADER include/spdk/event.h 00:02:14.011 TEST_HEADER include/spdk/fd_group.h 00:02:14.011 TEST_HEADER include/spdk/file.h 00:02:14.011 TEST_HEADER include/spdk/fd.h 00:02:14.011 TEST_HEADER include/spdk/fsdev.h 00:02:14.011 TEST_HEADER include/spdk/fsdev_module.h 00:02:14.011 TEST_HEADER include/spdk/ftl.h 00:02:14.011 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:14.011 TEST_HEADER include/spdk/gpt_spec.h 00:02:14.011 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:14.011 TEST_HEADER include/spdk/hexlify.h 00:02:14.011 TEST_HEADER include/spdk/histogram_data.h 00:02:14.011 TEST_HEADER include/spdk/idxd.h 00:02:14.011 TEST_HEADER include/spdk/idxd_spec.h 00:02:14.011 TEST_HEADER include/spdk/ioat.h 00:02:14.011 CC app/nvmf_tgt/nvmf_main.o 00:02:14.011 TEST_HEADER include/spdk/init.h 00:02:14.011 TEST_HEADER include/spdk/iscsi_spec.h 00:02:14.011 CC app/spdk_dd/spdk_dd.o 00:02:14.011 TEST_HEADER include/spdk/ioat_spec.h 00:02:14.011 TEST_HEADER include/spdk/json.h 00:02:14.011 TEST_HEADER include/spdk/jsonrpc.h 00:02:14.011 TEST_HEADER include/spdk/keyring.h 00:02:14.011 CC app/iscsi_tgt/iscsi_tgt.o 00:02:14.011 TEST_HEADER include/spdk/keyring_module.h 00:02:14.011 TEST_HEADER include/spdk/likely.h 00:02:14.011 TEST_HEADER include/spdk/lvol.h 00:02:14.011 TEST_HEADER include/spdk/log.h 00:02:14.011 TEST_HEADER include/spdk/memory.h 00:02:14.011 TEST_HEADER include/spdk/md5.h 00:02:14.011 TEST_HEADER include/spdk/nbd.h 00:02:14.011 TEST_HEADER include/spdk/mmio.h 00:02:14.011 TEST_HEADER include/spdk/notify.h 00:02:14.011 TEST_HEADER include/spdk/net.h 00:02:14.011 TEST_HEADER include/spdk/nvme_intel.h 00:02:14.011 TEST_HEADER include/spdk/nvme.h 00:02:14.011 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:14.011 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:14.011 TEST_HEADER include/spdk/nvme_spec.h 00:02:14.011 TEST_HEADER include/spdk/nvme_zns.h 00:02:14.011 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:14.011 CC app/spdk_tgt/spdk_tgt.o 00:02:14.011 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:14.011 TEST_HEADER include/spdk/nvmf.h 00:02:14.011 TEST_HEADER include/spdk/nvmf_transport.h 00:02:14.011 TEST_HEADER include/spdk/nvmf_spec.h 00:02:14.011 TEST_HEADER include/spdk/opal_spec.h 00:02:14.011 TEST_HEADER include/spdk/opal.h 00:02:14.011 TEST_HEADER include/spdk/pci_ids.h 00:02:14.011 TEST_HEADER include/spdk/pipe.h 00:02:14.011 TEST_HEADER include/spdk/queue.h 00:02:14.011 TEST_HEADER include/spdk/reduce.h 00:02:14.011 TEST_HEADER include/spdk/rpc.h 00:02:14.011 TEST_HEADER include/spdk/scheduler.h 00:02:14.011 TEST_HEADER include/spdk/scsi.h 00:02:14.011 TEST_HEADER include/spdk/scsi_spec.h 00:02:14.011 TEST_HEADER include/spdk/stdinc.h 00:02:14.011 TEST_HEADER include/spdk/sock.h 00:02:14.011 TEST_HEADER include/spdk/string.h 00:02:14.011 TEST_HEADER include/spdk/thread.h 00:02:14.011 TEST_HEADER include/spdk/trace.h 00:02:14.011 TEST_HEADER include/spdk/trace_parser.h 00:02:14.011 TEST_HEADER include/spdk/tree.h 00:02:14.011 TEST_HEADER include/spdk/ublk.h 00:02:14.011 TEST_HEADER include/spdk/util.h 00:02:14.011 TEST_HEADER include/spdk/uuid.h 00:02:14.011 TEST_HEADER include/spdk/version.h 00:02:14.011 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:02:14.011 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:14.011 TEST_HEADER include/spdk/vhost.h 00:02:14.011 TEST_HEADER include/spdk/vmd.h 00:02:14.011 TEST_HEADER include/spdk/xor.h 00:02:14.011 TEST_HEADER include/spdk/zipf.h 00:02:14.011 CXX test/cpp_headers/accel_module.o 00:02:14.011 CXX test/cpp_headers/accel.o 00:02:14.011 CXX test/cpp_headers/ae4dma.o 00:02:14.011 CXX test/cpp_headers/ae4dma_spec.o 00:02:14.011 CXX test/cpp_headers/assert.o 00:02:14.011 CXX test/cpp_headers/barrier.o 00:02:14.011 CXX test/cpp_headers/base64.o 00:02:14.011 CXX test/cpp_headers/bdev.o 00:02:14.011 CXX test/cpp_headers/bdev_module.o 00:02:14.011 CXX test/cpp_headers/bdev_zone.o 00:02:14.011 CXX test/cpp_headers/bit_array.o 00:02:14.011 CXX test/cpp_headers/bit_pool.o 00:02:14.011 CXX test/cpp_headers/blob_bdev.o 00:02:14.011 CXX test/cpp_headers/blobfs_bdev.o 00:02:14.011 CXX test/cpp_headers/blobfs.o 00:02:14.011 CXX test/cpp_headers/blob.o 00:02:14.011 CXX test/cpp_headers/config.o 00:02:14.011 CXX test/cpp_headers/conf.o 00:02:14.011 CXX test/cpp_headers/crc16.o 00:02:14.011 CXX test/cpp_headers/cpuset.o 00:02:14.011 CXX test/cpp_headers/crc32.o 00:02:14.011 CXX test/cpp_headers/crc64.o 00:02:14.011 CXX test/cpp_headers/dif.o 00:02:14.011 CXX test/cpp_headers/dma.o 00:02:14.011 CXX test/cpp_headers/endian.o 00:02:14.011 CXX test/cpp_headers/env_dpdk.o 00:02:14.011 CXX test/cpp_headers/env.o 00:02:14.012 CXX test/cpp_headers/fd_group.o 00:02:14.012 CXX test/cpp_headers/event.o 00:02:14.012 CXX test/cpp_headers/fd.o 00:02:14.012 CXX test/cpp_headers/file.o 00:02:14.012 CXX test/cpp_headers/fsdev.o 00:02:14.012 CXX test/cpp_headers/ftl.o 00:02:14.012 CXX test/cpp_headers/fsdev_module.o 00:02:14.012 CXX test/cpp_headers/fuse_dispatcher.o 00:02:14.012 CXX test/cpp_headers/histogram_data.o 00:02:14.012 CXX test/cpp_headers/hexlify.o 00:02:14.012 CXX test/cpp_headers/gpt_spec.o 00:02:14.012 CXX test/cpp_headers/idxd_spec.o 00:02:14.012 CXX test/cpp_headers/idxd.o 00:02:14.280 CXX test/cpp_headers/init.o 00:02:14.280 CXX test/cpp_headers/ioat_spec.o 00:02:14.280 CXX test/cpp_headers/iscsi_spec.o 00:02:14.280 CXX test/cpp_headers/ioat.o 00:02:14.280 CXX test/cpp_headers/json.o 00:02:14.280 CXX test/cpp_headers/jsonrpc.o 00:02:14.280 CXX test/cpp_headers/keyring.o 00:02:14.280 CXX test/cpp_headers/keyring_module.o 00:02:14.280 CXX test/cpp_headers/md5.o 00:02:14.280 CXX test/cpp_headers/likely.o 00:02:14.280 CXX test/cpp_headers/lvol.o 00:02:14.280 CXX test/cpp_headers/log.o 00:02:14.280 CXX test/cpp_headers/mmio.o 00:02:14.280 CXX test/cpp_headers/memory.o 00:02:14.280 CC test/app/jsoncat/jsoncat.o 00:02:14.280 CXX test/cpp_headers/nbd.o 00:02:14.280 CXX test/cpp_headers/notify.o 00:02:14.280 CC test/env/memory/memory_ut.o 00:02:14.280 CXX test/cpp_headers/nvme_intel.o 00:02:14.280 CXX test/cpp_headers/nvme.o 00:02:14.280 CXX test/cpp_headers/net.o 00:02:14.280 CC test/app/histogram_perf/histogram_perf.o 00:02:14.280 CXX test/cpp_headers/nvme_ocssd.o 00:02:14.280 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:14.280 CC test/app/stub/stub.o 00:02:14.280 CXX test/cpp_headers/nvme_zns.o 00:02:14.280 CXX test/cpp_headers/nvmf_cmd.o 00:02:14.280 CXX test/cpp_headers/nvme_spec.o 00:02:14.280 CXX test/cpp_headers/nvmf.o 00:02:14.280 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:14.280 CXX test/cpp_headers/nvmf_spec.o 00:02:14.280 CXX test/cpp_headers/nvmf_transport.o 00:02:14.280 CC examples/util/zipf/zipf.o 00:02:14.280 CC test/env/vtophys/vtophys.o 00:02:14.280 CXX 
test/cpp_headers/opal.o 00:02:14.280 CXX test/cpp_headers/opal_spec.o 00:02:14.280 CC examples/ioat/perf/perf.o 00:02:14.280 CXX test/cpp_headers/queue.o 00:02:14.280 CC test/env/pci/pci_ut.o 00:02:14.280 CXX test/cpp_headers/pipe.o 00:02:14.280 CXX test/cpp_headers/pci_ids.o 00:02:14.280 CC test/thread/poller_perf/poller_perf.o 00:02:14.280 CXX test/cpp_headers/reduce.o 00:02:14.280 CXX test/cpp_headers/rpc.o 00:02:14.280 CXX test/cpp_headers/scheduler.o 00:02:14.280 CXX test/cpp_headers/scsi_spec.o 00:02:14.280 CXX test/cpp_headers/stdinc.o 00:02:14.280 CXX test/cpp_headers/sock.o 00:02:14.280 CXX test/cpp_headers/scsi.o 00:02:14.280 CXX test/cpp_headers/string.o 00:02:14.280 CXX test/cpp_headers/trace.o 00:02:14.280 CXX test/cpp_headers/thread.o 00:02:14.280 CC examples/ioat/verify/verify.o 00:02:14.280 CXX test/cpp_headers/trace_parser.o 00:02:14.280 CXX test/cpp_headers/tree.o 00:02:14.280 LINK spdk_lspci 00:02:14.280 CXX test/cpp_headers/ublk.o 00:02:14.280 CXX test/cpp_headers/util.o 00:02:14.280 CC test/dma/test_dma/test_dma.o 00:02:14.280 CXX test/cpp_headers/uuid.o 00:02:14.280 CXX test/cpp_headers/vfio_user_pci.o 00:02:14.280 CXX test/cpp_headers/version.o 00:02:14.280 CXX test/cpp_headers/vhost.o 00:02:14.280 CXX test/cpp_headers/vfio_user_spec.o 00:02:14.280 CXX test/cpp_headers/vmd.o 00:02:14.280 CXX test/cpp_headers/xor.o 00:02:14.280 CC app/fio/nvme/fio_plugin.o 00:02:14.280 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:14.280 CXX test/cpp_headers/zipf.o 00:02:14.280 CC test/app/bdev_svc/bdev_svc.o 00:02:14.280 CC app/fio/bdev/fio_plugin.o 00:02:14.280 LINK rpc_client_test 00:02:14.280 LINK spdk_nvme_discover 00:02:14.550 LINK interrupt_tgt 00:02:14.550 LINK nvmf_tgt 00:02:14.550 LINK iscsi_tgt 00:02:14.550 LINK spdk_trace_record 00:02:14.812 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:14.812 CC test/env/mem_callbacks/mem_callbacks.o 00:02:14.812 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:14.812 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:14.812 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:14.812 LINK spdk_tgt 00:02:15.072 LINK env_dpdk_post_init 00:02:15.072 LINK stub 00:02:15.072 LINK poller_perf 00:02:15.072 LINK ioat_perf 00:02:15.072 LINK jsoncat 00:02:15.072 LINK spdk_dd 00:02:15.072 LINK vtophys 00:02:15.072 LINK histogram_perf 00:02:15.072 LINK zipf 00:02:15.333 LINK bdev_svc 00:02:15.333 LINK verify 00:02:15.595 LINK spdk_trace 00:02:15.595 LINK nvme_fuzz 00:02:15.595 LINK vhost_fuzz 00:02:15.595 CC test/event/reactor_perf/reactor_perf.o 00:02:15.595 CC test/event/event_perf/event_perf.o 00:02:15.595 CC test/event/reactor/reactor.o 00:02:15.595 LINK pci_ut 00:02:15.595 CC test/event/app_repeat/app_repeat.o 00:02:15.595 LINK test_dma 00:02:15.595 LINK spdk_nvme 00:02:15.595 CC test/event/scheduler/scheduler.o 00:02:15.595 LINK spdk_bdev 00:02:15.857 LINK mem_callbacks 00:02:15.857 CC examples/sock/hello_world/hello_sock.o 00:02:15.857 CC examples/vmd/led/led.o 00:02:15.857 CC examples/idxd/perf/perf.o 00:02:15.857 CC examples/vmd/lsvmd/lsvmd.o 00:02:15.857 LINK event_perf 00:02:15.857 LINK reactor_perf 00:02:15.857 LINK reactor 00:02:15.857 LINK spdk_nvme_perf 00:02:15.857 LINK spdk_top 00:02:15.857 CC examples/thread/thread/thread_ex.o 00:02:15.857 LINK app_repeat 00:02:15.857 LINK spdk_nvme_identify 00:02:15.857 CC app/vhost/vhost.o 00:02:15.857 LINK scheduler 00:02:15.857 LINK led 00:02:15.857 LINK lsvmd 00:02:16.118 LINK hello_sock 00:02:16.118 LINK idxd_perf 00:02:16.118 LINK thread 00:02:16.118 LINK vhost 00:02:16.379 LINK memory_ut 
00:02:16.379 CC test/nvme/overhead/overhead.o 00:02:16.379 CC test/nvme/connect_stress/connect_stress.o 00:02:16.379 CC test/nvme/cuse/cuse.o 00:02:16.379 CC test/nvme/aer/aer.o 00:02:16.379 CC test/nvme/reserve/reserve.o 00:02:16.379 CC test/nvme/e2edp/nvme_dp.o 00:02:16.379 CC test/nvme/sgl/sgl.o 00:02:16.379 CC test/nvme/reset/reset.o 00:02:16.379 CC test/nvme/simple_copy/simple_copy.o 00:02:16.379 CC test/nvme/startup/startup.o 00:02:16.379 CC test/nvme/err_injection/err_injection.o 00:02:16.379 CC test/nvme/fdp/fdp.o 00:02:16.379 CC test/nvme/fused_ordering/fused_ordering.o 00:02:16.379 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:16.379 CC test/nvme/compliance/nvme_compliance.o 00:02:16.379 CC test/nvme/boot_partition/boot_partition.o 00:02:16.379 CC test/blobfs/mkfs/mkfs.o 00:02:16.379 CC test/accel/dif/dif.o 00:02:16.379 CC test/lvol/esnap/esnap.o 00:02:16.640 LINK startup 00:02:16.640 LINK connect_stress 00:02:16.640 LINK doorbell_aers 00:02:16.640 LINK boot_partition 00:02:16.640 LINK err_injection 00:02:16.640 LINK fused_ordering 00:02:16.640 LINK reserve 00:02:16.640 LINK simple_copy 00:02:16.640 LINK mkfs 00:02:16.640 CC examples/nvme/arbitration/arbitration.o 00:02:16.640 CC examples/nvme/hello_world/hello_world.o 00:02:16.640 LINK overhead 00:02:16.640 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:16.640 LINK reset 00:02:16.640 CC examples/nvme/reconnect/reconnect.o 00:02:16.640 LINK sgl 00:02:16.640 CC examples/nvme/abort/abort.o 00:02:16.640 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:16.640 CC examples/nvme/hotplug/hotplug.o 00:02:16.640 LINK aer 00:02:16.640 LINK nvme_dp 00:02:16.640 LINK iscsi_fuzz 00:02:16.640 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:16.640 LINK nvme_compliance 00:02:16.640 LINK fdp 00:02:16.900 CC examples/accel/perf/accel_perf.o 00:02:16.900 CC examples/blob/cli/blobcli.o 00:02:16.900 CC examples/blob/hello_world/hello_blob.o 00:02:16.900 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:16.900 LINK pmr_persistence 00:02:16.900 LINK cmb_copy 00:02:16.900 LINK hello_world 00:02:16.900 LINK hotplug 00:02:16.900 LINK dif 00:02:16.900 LINK arbitration 00:02:16.900 LINK reconnect 00:02:16.900 LINK abort 00:02:17.161 LINK hello_blob 00:02:17.161 LINK nvme_manage 00:02:17.161 LINK hello_fsdev 00:02:17.161 LINK accel_perf 00:02:17.422 LINK blobcli 00:02:17.422 LINK cuse 00:02:17.685 CC test/bdev/bdevio/bdevio.o 00:02:17.947 CC examples/bdev/hello_world/hello_bdev.o 00:02:17.947 CC examples/bdev/bdevperf/bdevperf.o 00:02:17.947 LINK bdevio 00:02:18.207 LINK hello_bdev 00:02:18.779 LINK bdevperf 00:02:19.352 CC examples/nvmf/nvmf/nvmf.o 00:02:19.613 LINK nvmf 00:02:21.000 LINK esnap 00:02:21.262 00:02:21.262 real 0m55.981s 00:02:21.262 user 8m7.936s 00:02:21.262 sys 5m27.950s 00:02:21.262 13:49:19 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:21.262 13:49:19 make -- common/autotest_common.sh@10 -- $ set +x 00:02:21.262 ************************************ 00:02:21.262 END TEST make 00:02:21.262 ************************************ 00:02:21.262 13:49:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:21.262 13:49:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:21.262 13:49:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:21.262 13:49:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.262 13:49:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:21.262 13:49:19 -- pm/common@44 -- $ 
pid=695704 00:02:21.262 13:49:19 -- pm/common@50 -- $ kill -TERM 695704 00:02:21.262 13:49:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.262 13:49:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:21.262 13:49:19 -- pm/common@44 -- $ pid=695705 00:02:21.262 13:49:19 -- pm/common@50 -- $ kill -TERM 695705 00:02:21.262 13:49:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.262 13:49:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:21.262 13:49:19 -- pm/common@44 -- $ pid=695707 00:02:21.262 13:49:19 -- pm/common@50 -- $ kill -TERM 695707 00:02:21.262 13:49:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.262 13:49:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:21.262 13:49:19 -- pm/common@44 -- $ pid=695730 00:02:21.262 13:49:19 -- pm/common@50 -- $ sudo -E kill -TERM 695730 00:02:21.262 13:49:19 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:21.262 13:49:19 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:21.262 13:49:19 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:21.262 13:49:19 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:21.262 13:49:19 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:21.526 13:49:19 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:21.526 13:49:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:21.526 13:49:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:21.526 13:49:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:21.526 13:49:19 -- scripts/common.sh@336 -- # IFS=.-: 00:02:21.526 13:49:19 -- scripts/common.sh@336 -- # read -ra ver1 00:02:21.526 13:49:19 -- scripts/common.sh@337 -- # IFS=.-: 00:02:21.527 13:49:19 -- scripts/common.sh@337 -- # read -ra ver2 00:02:21.527 13:49:19 -- scripts/common.sh@338 -- # local 'op=<' 00:02:21.527 13:49:19 -- scripts/common.sh@340 -- # ver1_l=2 00:02:21.527 13:49:19 -- scripts/common.sh@341 -- # ver2_l=1 00:02:21.527 13:49:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:21.527 13:49:19 -- scripts/common.sh@344 -- # case "$op" in 00:02:21.527 13:49:19 -- scripts/common.sh@345 -- # : 1 00:02:21.527 13:49:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:21.527 13:49:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.527 13:49:19 -- scripts/common.sh@365 -- # decimal 1 00:02:21.527 13:49:19 -- scripts/common.sh@353 -- # local d=1 00:02:21.527 13:49:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:21.527 13:49:19 -- scripts/common.sh@355 -- # echo 1 00:02:21.527 13:49:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:21.527 13:49:19 -- scripts/common.sh@366 -- # decimal 2 00:02:21.527 13:49:19 -- scripts/common.sh@353 -- # local d=2 00:02:21.527 13:49:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:21.527 13:49:19 -- scripts/common.sh@355 -- # echo 2 00:02:21.527 13:49:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:21.527 13:49:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:21.527 13:49:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:21.527 13:49:19 -- scripts/common.sh@368 -- # return 0 00:02:21.527 13:49:19 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:21.527 13:49:19 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:21.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:21.527 --rc genhtml_branch_coverage=1 00:02:21.527 --rc genhtml_function_coverage=1 00:02:21.527 --rc genhtml_legend=1 00:02:21.527 --rc geninfo_all_blocks=1 00:02:21.527 --rc geninfo_unexecuted_blocks=1 00:02:21.527 00:02:21.527 ' 00:02:21.527 13:49:19 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:21.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:21.527 --rc genhtml_branch_coverage=1 00:02:21.527 --rc genhtml_function_coverage=1 00:02:21.527 --rc genhtml_legend=1 00:02:21.527 --rc geninfo_all_blocks=1 00:02:21.527 --rc geninfo_unexecuted_blocks=1 00:02:21.527 00:02:21.527 ' 00:02:21.527 13:49:19 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:21.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:21.527 --rc genhtml_branch_coverage=1 00:02:21.527 --rc genhtml_function_coverage=1 00:02:21.527 --rc genhtml_legend=1 00:02:21.527 --rc geninfo_all_blocks=1 00:02:21.527 --rc geninfo_unexecuted_blocks=1 00:02:21.527 00:02:21.527 ' 00:02:21.527 13:49:19 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:21.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:21.527 --rc genhtml_branch_coverage=1 00:02:21.527 --rc genhtml_function_coverage=1 00:02:21.527 --rc genhtml_legend=1 00:02:21.527 --rc geninfo_all_blocks=1 00:02:21.527 --rc geninfo_unexecuted_blocks=1 00:02:21.527 00:02:21.527 ' 00:02:21.527 13:49:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:21.527 13:49:19 -- nvmf/common.sh@7 -- # uname -s 00:02:21.527 13:49:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:21.527 13:49:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:21.527 13:49:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:21.527 13:49:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:21.527 13:49:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:21.527 13:49:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:21.527 13:49:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:21.527 13:49:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:21.527 13:49:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:21.527 13:49:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:21.527 13:49:19 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:21.527 13:49:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:21.527 13:49:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:21.527 13:49:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:21.527 13:49:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:21.527 13:49:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:21.527 13:49:19 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:21.527 13:49:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:21.527 13:49:19 -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:21.527 13:49:19 -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:21.527 13:49:19 -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:21.527 13:49:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.527 13:49:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.527 13:49:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.527 13:49:19 -- paths/export.sh@5 -- # export PATH 00:02:21.527 13:49:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.527 13:49:19 -- nvmf/common.sh@51 -- # : 0 00:02:21.527 13:49:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:21.528 13:49:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:21.528 13:49:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:21.528 13:49:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:21.528 13:49:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:21.528 13:49:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:21.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:21.528 13:49:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:21.528 13:49:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:21.528 13:49:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:21.528 13:49:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:21.528 13:49:19 -- spdk/autotest.sh@32 -- # uname -s 00:02:21.528 13:49:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:21.528 13:49:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:21.528 13:49:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
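[annotation] The scripts/common.sh trace above (lt / cmp_versions) is the lcov version gate that decides which coverage flags autotest passes to lcov and genhtml. Below is a minimal, simplified re-implementation of that comparison for reference only; it is a sketch of the traced logic, not the exact source, and the trailing if-block is just one example of how the result is consumed (the real script also appends the matching genhtml --rc options).

  # Simplified sketch of the version gate traced above: split versions on '.'
  # and '-' and compare component by component (assumption: mirrors the intent
  # of scripts/common.sh lt/cmp_versions, not a verbatim copy).
  lt() {
      local IFS=.-
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v
      for (( v = 0; v < n; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
      done
      return 1   # equal versions are not "less than"
  }
  # In the log above "lcov --version | awk '{print $NF}'" yields 1.15, so
  # lt 1.15 2 succeeds and the legacy --rc lcov_* options are selected:
  if lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi

[end annotation]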
00:02:21.528 13:49:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:21.528 13:49:19 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:21.528 13:49:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:21.528 13:49:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:21.528 13:49:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:21.528 13:49:19 -- spdk/autotest.sh@48 -- # udevadm_pid=762002 00:02:21.528 13:49:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:21.528 13:49:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:21.528 13:49:19 -- pm/common@17 -- # local monitor 00:02:21.528 13:49:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.528 13:49:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.528 13:49:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.528 13:49:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.528 13:49:19 -- pm/common@21 -- # date +%s 00:02:21.528 13:49:19 -- pm/common@25 -- # sleep 1 00:02:21.528 13:49:19 -- pm/common@21 -- # date +%s 00:02:21.528 13:49:19 -- pm/common@21 -- # date +%s 00:02:21.528 13:49:19 -- pm/common@21 -- # date +%s 00:02:21.528 13:49:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730292559 00:02:21.528 13:49:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730292559 00:02:21.528 13:49:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730292559 00:02:21.528 13:49:19 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730292559 00:02:21.528 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730292559_collect-cpu-load.pm.log 00:02:21.528 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730292559_collect-vmstat.pm.log 00:02:21.528 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730292559_collect-cpu-temp.pm.log 00:02:21.528 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730292559_collect-bmc-pm.bmc.pm.log 00:02:22.475 13:49:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:22.475 13:49:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:22.475 13:49:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:22.475 13:49:20 -- common/autotest_common.sh@10 -- # set +x 00:02:22.475 13:49:20 -- spdk/autotest.sh@59 -- # create_test_list 00:02:22.475 13:49:20 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:22.475 13:49:20 -- common/autotest_common.sh@10 -- # set +x 00:02:22.475 13:49:20 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:22.475 13:49:20 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:22.475 13:49:20 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:22.475 13:49:20 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:22.475 13:49:20 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:22.475 13:49:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:22.475 13:49:20 -- common/autotest_common.sh@1457 -- # uname 00:02:22.475 13:49:20 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:22.475 13:49:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:22.475 13:49:20 -- common/autotest_common.sh@1477 -- # uname 00:02:22.475 13:49:20 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:22.475 13:49:20 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:22.475 13:49:20 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:22.735 lcov: LCOV version 1.15 00:02:22.735 13:49:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:44.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:44.704 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:52.848 13:49:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:52.848 13:49:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:52.848 13:49:50 -- common/autotest_common.sh@10 -- # set +x 00:02:52.848 13:49:50 -- spdk/autotest.sh@78 -- # rm -f 00:02:52.848 13:49:50 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.157 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:56.157 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:56.157 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:56.419 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:56.419 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:56.680 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:56.680 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:56.680 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:56.941 13:49:55 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:02:56.941 13:49:55 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:56.941 13:49:55 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:56.941 13:49:55 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:56.941 13:49:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:56.941 13:49:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:56.941 13:49:55 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:56.941 13:49:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:56.941 13:49:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:56.941 13:49:55 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:56.941 13:49:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:56.941 13:49:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:56.941 13:49:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:56.941 13:49:55 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:56.941 13:49:55 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:56.941 No valid GPT data, bailing 00:02:56.941 13:49:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:56.941 13:49:55 -- scripts/common.sh@394 -- # pt= 00:02:56.941 13:49:55 -- scripts/common.sh@395 -- # return 1 00:02:56.941 13:49:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:56.941 1+0 records in 00:02:56.941 1+0 records out 00:02:56.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457866 s, 229 MB/s 00:02:56.941 13:49:55 -- spdk/autotest.sh@105 -- # sync 00:02:56.941 13:49:55 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:56.941 13:49:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:56.941 13:49:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:06.945 13:50:03 -- spdk/autotest.sh@111 -- # uname -s 00:03:06.945 13:50:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:06.945 13:50:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:06.945 13:50:03 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:08.862 Hugepages 00:03:08.862 node hugesize free / total 00:03:08.862 node0 1048576kB 0 / 0 00:03:08.862 node0 2048kB 0 / 0 00:03:08.862 node1 1048576kB 0 / 0 00:03:08.862 node1 2048kB 0 / 0 00:03:08.862 00:03:08.862 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:08.862 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:08.862 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:08.862 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:08.862 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:08.862 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:08.862 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:08.862 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:08.862 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:09.124 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:09.124 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:09.124 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:09.124 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:09.124 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:09.124 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:09.124 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:09.124 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:09.124 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:09.124 13:50:07 -- spdk/autotest.sh@117 -- # uname -s 00:03:09.124 13:50:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:09.124 13:50:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:09.124 13:50:07 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.341 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:13.341 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:14.728 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:14.990 13:50:13 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:15.938 13:50:14 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:15.938 13:50:14 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:15.938 13:50:14 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:15.938 13:50:14 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:15.938 13:50:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:15.938 13:50:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:15.938 13:50:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:15.938 13:50:14 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:15.938 13:50:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:15.938 13:50:14 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:15.938 13:50:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:15.938 13:50:14 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.244 Waiting for block devices as requested 00:03:19.506 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:19.506 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:19.506 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:19.768 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:19.768 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:19.768 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:19.768 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:20.030 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:20.030 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:20.291 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:20.291 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:20.552 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:20.552 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:20.552 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:20.552 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:20.831 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:20.831 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:21.100 13:50:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:21.100 13:50:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:21.100 13:50:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:21.100 13:50:19 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:21.100 13:50:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:21.100 13:50:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:21.100 13:50:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:21.100 13:50:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:21.100 13:50:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:21.100 13:50:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:21.100 13:50:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:21.100 13:50:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:21.100 13:50:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:21.100 13:50:19 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:21.100 13:50:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:21.100 13:50:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:21.100 13:50:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:21.100 13:50:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:21.100 13:50:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:21.100 13:50:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:21.100 13:50:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:21.100 13:50:19 -- common/autotest_common.sh@1543 -- # continue 00:03:21.100 13:50:19 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:21.100 13:50:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:21.100 13:50:19 -- common/autotest_common.sh@10 -- # set +x 00:03:21.362 13:50:19 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:21.362 13:50:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.362 13:50:19 -- common/autotest_common.sh@10 -- # set +x 00:03:21.362 13:50:19 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.668 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:24.668 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:24.668 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:24.668 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:24.668 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:24.668 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:24.668 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:24.668 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:24.668 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:24.668 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:24.930 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:24.930 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:24.930 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:24.930 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:24.930 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:24.930 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:24.930 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:25.191 13:50:23 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:25.191 13:50:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:25.191 13:50:23 -- common/autotest_common.sh@10 -- # set +x 00:03:25.191 13:50:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:25.191 13:50:23 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:25.191 13:50:23 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:25.191 13:50:23 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:25.191 13:50:23 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:25.191 13:50:23 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:25.191 13:50:23 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:25.191 13:50:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:25.191 13:50:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:25.191 13:50:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:25.191 13:50:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:25.191 13:50:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:25.191 13:50:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:25.452 13:50:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:25.453 13:50:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:25.453 13:50:23 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:25.453 13:50:23 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:25.453 13:50:23 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:25.453 13:50:23 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:25.453 13:50:23 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:25.453 13:50:23 -- common/autotest_common.sh@1572 -- # return 0 00:03:25.453 13:50:23 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:25.453 13:50:23 -- common/autotest_common.sh@1580 -- # return 0 00:03:25.453 13:50:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:25.453 13:50:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:25.453 13:50:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:25.453 13:50:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:25.453 13:50:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:25.453 13:50:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:25.453 13:50:23 -- common/autotest_common.sh@10 -- # set +x 00:03:25.453 13:50:23 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:25.453 13:50:23 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:25.453 13:50:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.453 13:50:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.453 13:50:23 -- common/autotest_common.sh@10 -- # set +x 00:03:25.453 ************************************ 00:03:25.453 START TEST env 00:03:25.453 ************************************ 00:03:25.453 13:50:23 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:25.453 * Looking for test storage... 
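[annotation] The opal_revert_cleanup trace above builds its controller list by running scripts/gen_nvme.sh, pulling the PCI BDFs out with jq, and then filtering on the device id read from sysfs. The sketch below illustrates that enumeration pattern only; the helper names are local to this sketch (the real helpers live in common/autotest_common.sh), and it assumes $rootdir points at the spdk checkout shown in the log. On this node the single controller is 144d:a80a, so the 0x0a54 filter matches nothing and the cleanup is skipped, exactly as traced.

  # Illustrative sketch of the BDF filtering traced above.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  get_nvme_bdfs() {
      # gen_nvme.sh emits bdev_nvme_attach_controller config; traddr holds the PCI BDF
      "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
  }
  bdfs_by_device_id() {   # hypothetical name for this sketch
      local want=$1 bdf device
      for bdf in $(get_nvme_bdfs); do
          device=$(cat "/sys/bus/pci/devices/$bdf/device")
          [[ $device == "$want" ]] && echo "$bdf"
      done
  }
  bdfs_by_device_id 0x0a54   # empty on this node: 0000:65:00.0 reports 0xa80a

[end annotation]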
00:03:25.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:25.453 13:50:23 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:25.453 13:50:23 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:25.453 13:50:23 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:25.714 13:50:23 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:25.714 13:50:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.714 13:50:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.714 13:50:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.714 13:50:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.714 13:50:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.714 13:50:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.714 13:50:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.714 13:50:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.714 13:50:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.714 13:50:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.714 13:50:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.714 13:50:23 env -- scripts/common.sh@344 -- # case "$op" in 00:03:25.714 13:50:23 env -- scripts/common.sh@345 -- # : 1 00:03:25.714 13:50:23 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.714 13:50:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:25.714 13:50:23 env -- scripts/common.sh@365 -- # decimal 1 00:03:25.714 13:50:23 env -- scripts/common.sh@353 -- # local d=1 00:03:25.714 13:50:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.714 13:50:23 env -- scripts/common.sh@355 -- # echo 1 00:03:25.714 13:50:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.714 13:50:23 env -- scripts/common.sh@366 -- # decimal 2 00:03:25.714 13:50:23 env -- scripts/common.sh@353 -- # local d=2 00:03:25.714 13:50:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.714 13:50:23 env -- scripts/common.sh@355 -- # echo 2 00:03:25.714 13:50:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.714 13:50:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.714 13:50:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.714 13:50:23 env -- scripts/common.sh@368 -- # return 0 00:03:25.714 13:50:23 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.714 13:50:23 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:25.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.714 --rc genhtml_branch_coverage=1 00:03:25.714 --rc genhtml_function_coverage=1 00:03:25.714 --rc genhtml_legend=1 00:03:25.714 --rc geninfo_all_blocks=1 00:03:25.714 --rc geninfo_unexecuted_blocks=1 00:03:25.714 00:03:25.714 ' 00:03:25.714 13:50:23 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:25.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.714 --rc genhtml_branch_coverage=1 00:03:25.714 --rc genhtml_function_coverage=1 00:03:25.714 --rc genhtml_legend=1 00:03:25.714 --rc geninfo_all_blocks=1 00:03:25.714 --rc geninfo_unexecuted_blocks=1 00:03:25.714 00:03:25.714 ' 00:03:25.714 13:50:23 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:25.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.714 --rc genhtml_branch_coverage=1 00:03:25.714 --rc genhtml_function_coverage=1 
00:03:25.714 --rc genhtml_legend=1 00:03:25.714 --rc geninfo_all_blocks=1 00:03:25.714 --rc geninfo_unexecuted_blocks=1 00:03:25.714 00:03:25.714 ' 00:03:25.714 13:50:23 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:25.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.714 --rc genhtml_branch_coverage=1 00:03:25.714 --rc genhtml_function_coverage=1 00:03:25.714 --rc genhtml_legend=1 00:03:25.714 --rc geninfo_all_blocks=1 00:03:25.714 --rc geninfo_unexecuted_blocks=1 00:03:25.714 00:03:25.714 ' 00:03:25.714 13:50:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:25.714 13:50:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.714 13:50:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.714 13:50:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:25.714 ************************************ 00:03:25.714 START TEST env_memory 00:03:25.714 ************************************ 00:03:25.714 13:50:23 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:25.714 00:03:25.714 00:03:25.714 CUnit - A unit testing framework for C - Version 2.1-3 00:03:25.714 http://cunit.sourceforge.net/ 00:03:25.714 00:03:25.714 00:03:25.714 Suite: memory 00:03:25.714 Test: alloc and free memory map ...[2024-10-30 13:50:23.922273] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:25.714 passed 00:03:25.714 Test: mem map translation ...[2024-10-30 13:50:23.947918] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:25.714 [2024-10-30 13:50:23.947945] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:25.714 [2024-10-30 13:50:23.947991] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:25.714 [2024-10-30 13:50:23.948003] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:25.714 passed 00:03:25.714 Test: mem map registration ...[2024-10-30 13:50:24.003144] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:25.714 [2024-10-30 13:50:24.003164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:25.975 passed 00:03:25.975 Test: mem map adjacent registrations ...passed 00:03:25.975 00:03:25.975 Run Summary: Type Total Ran Passed Failed Inactive 00:03:25.975 suites 1 1 n/a 0 0 00:03:25.975 tests 4 4 4 0 0 00:03:25.975 asserts 152 152 152 0 n/a 00:03:25.975 00:03:25.975 Elapsed time = 0.193 seconds 00:03:25.975 00:03:25.975 real 0m0.208s 00:03:25.975 user 0m0.197s 00:03:25.975 sys 0m0.010s 00:03:25.975 13:50:24 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:25.975 13:50:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:03:25.975 ************************************ 00:03:25.975 END TEST env_memory 00:03:25.975 ************************************ 00:03:25.975 13:50:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:25.975 13:50:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.975 13:50:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.975 13:50:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:25.975 ************************************ 00:03:25.975 START TEST env_vtophys 00:03:25.975 ************************************ 00:03:25.975 13:50:24 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:25.975 EAL: lib.eal log level changed from notice to debug 00:03:25.975 EAL: Detected lcore 0 as core 0 on socket 0 00:03:25.975 EAL: Detected lcore 1 as core 1 on socket 0 00:03:25.975 EAL: Detected lcore 2 as core 2 on socket 0 00:03:25.975 EAL: Detected lcore 3 as core 3 on socket 0 00:03:25.975 EAL: Detected lcore 4 as core 4 on socket 0 00:03:25.975 EAL: Detected lcore 5 as core 5 on socket 0 00:03:25.975 EAL: Detected lcore 6 as core 6 on socket 0 00:03:25.975 EAL: Detected lcore 7 as core 7 on socket 0 00:03:25.975 EAL: Detected lcore 8 as core 8 on socket 0 00:03:25.975 EAL: Detected lcore 9 as core 9 on socket 0 00:03:25.975 EAL: Detected lcore 10 as core 10 on socket 0 00:03:25.975 EAL: Detected lcore 11 as core 11 on socket 0 00:03:25.975 EAL: Detected lcore 12 as core 12 on socket 0 00:03:25.975 EAL: Detected lcore 13 as core 13 on socket 0 00:03:25.975 EAL: Detected lcore 14 as core 14 on socket 0 00:03:25.975 EAL: Detected lcore 15 as core 15 on socket 0 00:03:25.975 EAL: Detected lcore 16 as core 16 on socket 0 00:03:25.975 EAL: Detected lcore 17 as core 17 on socket 0 00:03:25.975 EAL: Detected lcore 18 as core 18 on socket 0 00:03:25.975 EAL: Detected lcore 19 as core 19 on socket 0 00:03:25.975 EAL: Detected lcore 20 as core 20 on socket 0 00:03:25.975 EAL: Detected lcore 21 as core 21 on socket 0 00:03:25.975 EAL: Detected lcore 22 as core 22 on socket 0 00:03:25.975 EAL: Detected lcore 23 as core 23 on socket 0 00:03:25.975 EAL: Detected lcore 24 as core 24 on socket 0 00:03:25.975 EAL: Detected lcore 25 as core 25 on socket 0 00:03:25.975 EAL: Detected lcore 26 as core 26 on socket 0 00:03:25.975 EAL: Detected lcore 27 as core 27 on socket 0 00:03:25.975 EAL: Detected lcore 28 as core 28 on socket 0 00:03:25.975 EAL: Detected lcore 29 as core 29 on socket 0 00:03:25.975 EAL: Detected lcore 30 as core 30 on socket 0 00:03:25.975 EAL: Detected lcore 31 as core 31 on socket 0 00:03:25.975 EAL: Detected lcore 32 as core 32 on socket 0 00:03:25.975 EAL: Detected lcore 33 as core 33 on socket 0 00:03:25.975 EAL: Detected lcore 34 as core 34 on socket 0 00:03:25.975 EAL: Detected lcore 35 as core 35 on socket 0 00:03:25.975 EAL: Detected lcore 36 as core 0 on socket 1 00:03:25.975 EAL: Detected lcore 37 as core 1 on socket 1 00:03:25.975 EAL: Detected lcore 38 as core 2 on socket 1 00:03:25.975 EAL: Detected lcore 39 as core 3 on socket 1 00:03:25.975 EAL: Detected lcore 40 as core 4 on socket 1 00:03:25.975 EAL: Detected lcore 41 as core 5 on socket 1 00:03:25.975 EAL: Detected lcore 42 as core 6 on socket 1 00:03:25.975 EAL: Detected lcore 43 as core 7 on socket 1 00:03:25.975 EAL: Detected lcore 44 as core 8 on socket 1 00:03:25.975 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:25.976 EAL: Detected lcore 46 as core 10 on socket 1 00:03:25.976 EAL: Detected lcore 47 as core 11 on socket 1 00:03:25.976 EAL: Detected lcore 48 as core 12 on socket 1 00:03:25.976 EAL: Detected lcore 49 as core 13 on socket 1 00:03:25.976 EAL: Detected lcore 50 as core 14 on socket 1 00:03:25.976 EAL: Detected lcore 51 as core 15 on socket 1 00:03:25.976 EAL: Detected lcore 52 as core 16 on socket 1 00:03:25.976 EAL: Detected lcore 53 as core 17 on socket 1 00:03:25.976 EAL: Detected lcore 54 as core 18 on socket 1 00:03:25.976 EAL: Detected lcore 55 as core 19 on socket 1 00:03:25.976 EAL: Detected lcore 56 as core 20 on socket 1 00:03:25.976 EAL: Detected lcore 57 as core 21 on socket 1 00:03:25.976 EAL: Detected lcore 58 as core 22 on socket 1 00:03:25.976 EAL: Detected lcore 59 as core 23 on socket 1 00:03:25.976 EAL: Detected lcore 60 as core 24 on socket 1 00:03:25.976 EAL: Detected lcore 61 as core 25 on socket 1 00:03:25.976 EAL: Detected lcore 62 as core 26 on socket 1 00:03:25.976 EAL: Detected lcore 63 as core 27 on socket 1 00:03:25.976 EAL: Detected lcore 64 as core 28 on socket 1 00:03:25.976 EAL: Detected lcore 65 as core 29 on socket 1 00:03:25.976 EAL: Detected lcore 66 as core 30 on socket 1 00:03:25.976 EAL: Detected lcore 67 as core 31 on socket 1 00:03:25.976 EAL: Detected lcore 68 as core 32 on socket 1 00:03:25.976 EAL: Detected lcore 69 as core 33 on socket 1 00:03:25.976 EAL: Detected lcore 70 as core 34 on socket 1 00:03:25.976 EAL: Detected lcore 71 as core 35 on socket 1 00:03:25.976 EAL: Detected lcore 72 as core 0 on socket 0 00:03:25.976 EAL: Detected lcore 73 as core 1 on socket 0 00:03:25.976 EAL: Detected lcore 74 as core 2 on socket 0 00:03:25.976 EAL: Detected lcore 75 as core 3 on socket 0 00:03:25.976 EAL: Detected lcore 76 as core 4 on socket 0 00:03:25.976 EAL: Detected lcore 77 as core 5 on socket 0 00:03:25.976 EAL: Detected lcore 78 as core 6 on socket 0 00:03:25.976 EAL: Detected lcore 79 as core 7 on socket 0 00:03:25.976 EAL: Detected lcore 80 as core 8 on socket 0 00:03:25.976 EAL: Detected lcore 81 as core 9 on socket 0 00:03:25.976 EAL: Detected lcore 82 as core 10 on socket 0 00:03:25.976 EAL: Detected lcore 83 as core 11 on socket 0 00:03:25.976 EAL: Detected lcore 84 as core 12 on socket 0 00:03:25.976 EAL: Detected lcore 85 as core 13 on socket 0 00:03:25.976 EAL: Detected lcore 86 as core 14 on socket 0 00:03:25.976 EAL: Detected lcore 87 as core 15 on socket 0 00:03:25.976 EAL: Detected lcore 88 as core 16 on socket 0 00:03:25.976 EAL: Detected lcore 89 as core 17 on socket 0 00:03:25.976 EAL: Detected lcore 90 as core 18 on socket 0 00:03:25.976 EAL: Detected lcore 91 as core 19 on socket 0 00:03:25.976 EAL: Detected lcore 92 as core 20 on socket 0 00:03:25.976 EAL: Detected lcore 93 as core 21 on socket 0 00:03:25.976 EAL: Detected lcore 94 as core 22 on socket 0 00:03:25.976 EAL: Detected lcore 95 as core 23 on socket 0 00:03:25.976 EAL: Detected lcore 96 as core 24 on socket 0 00:03:25.976 EAL: Detected lcore 97 as core 25 on socket 0 00:03:25.976 EAL: Detected lcore 98 as core 26 on socket 0 00:03:25.976 EAL: Detected lcore 99 as core 27 on socket 0 00:03:25.976 EAL: Detected lcore 100 as core 28 on socket 0 00:03:25.976 EAL: Detected lcore 101 as core 29 on socket 0 00:03:25.976 EAL: Detected lcore 102 as core 30 on socket 0 00:03:25.976 EAL: Detected lcore 103 as core 31 on socket 0 00:03:25.976 EAL: Detected lcore 104 as core 32 on socket 0 00:03:25.976 EAL: Detected lcore 105 as core 33 on socket 0 00:03:25.976 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:25.976 EAL: Detected lcore 107 as core 35 on socket 0 00:03:25.976 EAL: Detected lcore 108 as core 0 on socket 1 00:03:25.976 EAL: Detected lcore 109 as core 1 on socket 1 00:03:25.976 EAL: Detected lcore 110 as core 2 on socket 1 00:03:25.976 EAL: Detected lcore 111 as core 3 on socket 1 00:03:25.976 EAL: Detected lcore 112 as core 4 on socket 1 00:03:25.976 EAL: Detected lcore 113 as core 5 on socket 1 00:03:25.976 EAL: Detected lcore 114 as core 6 on socket 1 00:03:25.976 EAL: Detected lcore 115 as core 7 on socket 1 00:03:25.976 EAL: Detected lcore 116 as core 8 on socket 1 00:03:25.976 EAL: Detected lcore 117 as core 9 on socket 1 00:03:25.976 EAL: Detected lcore 118 as core 10 on socket 1 00:03:25.976 EAL: Detected lcore 119 as core 11 on socket 1 00:03:25.976 EAL: Detected lcore 120 as core 12 on socket 1 00:03:25.976 EAL: Detected lcore 121 as core 13 on socket 1 00:03:25.976 EAL: Detected lcore 122 as core 14 on socket 1 00:03:25.976 EAL: Detected lcore 123 as core 15 on socket 1 00:03:25.976 EAL: Detected lcore 124 as core 16 on socket 1 00:03:25.976 EAL: Detected lcore 125 as core 17 on socket 1 00:03:25.976 EAL: Detected lcore 126 as core 18 on socket 1 00:03:25.976 EAL: Detected lcore 127 as core 19 on socket 1 00:03:25.976 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:25.976 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:25.976 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:25.976 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:25.976 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:25.976 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:25.976 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:25.976 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:25.976 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:25.976 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:25.976 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:25.976 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:25.976 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:25.976 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:25.976 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:25.976 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:25.976 EAL: Maximum logical cores by configuration: 128 00:03:25.976 EAL: Detected CPU lcores: 128 00:03:25.976 EAL: Detected NUMA nodes: 2 00:03:25.976 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:25.976 EAL: Detected shared linkage of DPDK 00:03:25.976 EAL: No shared files mode enabled, IPC will be disabled 00:03:25.976 EAL: Bus pci wants IOVA as 'DC' 00:03:25.976 EAL: Buses did not request a specific IOVA mode. 00:03:25.976 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:25.976 EAL: Selected IOVA mode 'VA' 00:03:25.976 EAL: Probing VFIO support... 00:03:25.976 EAL: IOMMU type 1 (Type 1) is supported 00:03:25.976 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:25.976 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:25.976 EAL: VFIO support initialized 00:03:25.976 EAL: Ask a virtual area of 0x2e000 bytes 00:03:25.976 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:25.976 EAL: Setting up physically contiguous memory... 
00:03:25.976 EAL: Setting maximum number of open files to 524288 00:03:25.976 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:25.976 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:25.976 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:25.976 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.976 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:25.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.976 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.976 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:25.976 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:25.976 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.976 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:25.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.976 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.976 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:25.976 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:25.976 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.976 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:25.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.976 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.976 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:25.976 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:25.976 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.976 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:25.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.976 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.976 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:25.976 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:25.976 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:25.976 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.976 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:25.976 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.976 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.976 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:25.976 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:25.976 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.976 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:25.976 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.976 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.976 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:25.976 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:25.976 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.976 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:25.976 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.976 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.976 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:25.976 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:25.976 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.976 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:25.976 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.976 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.976 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:25.976 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:25.976 EAL: Hugepages will be freed exactly as allocated. 00:03:25.976 EAL: No shared files mode enabled, IPC is disabled 00:03:25.976 EAL: No shared files mode enabled, IPC is disabled 00:03:25.976 EAL: TSC frequency is ~2400000 KHz 00:03:25.976 EAL: Main lcore 0 is ready (tid=7fcd431aba00;cpuset=[0]) 00:03:25.976 EAL: Trying to obtain current memory policy. 00:03:25.976 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.976 EAL: Restoring previous memory policy: 0 00:03:25.976 EAL: request: mp_malloc_sync 00:03:25.976 EAL: No shared files mode enabled, IPC is disabled 00:03:25.976 EAL: Heap on socket 0 was expanded by 2MB 00:03:25.976 EAL: No shared files mode enabled, IPC is disabled 00:03:25.976 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:25.976 EAL: Mem event callback 'spdk:(nil)' registered 00:03:26.236 00:03:26.236 00:03:26.236 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.236 http://cunit.sourceforge.net/ 00:03:26.236 00:03:26.236 00:03:26.236 Suite: components_suite 00:03:26.236 Test: vtophys_malloc_test ...passed 00:03:26.236 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:26.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.236 EAL: Restoring previous memory policy: 4 00:03:26.236 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.236 EAL: request: mp_malloc_sync 00:03:26.236 EAL: No shared files mode enabled, IPC is disabled 00:03:26.236 EAL: Heap on socket 0 was expanded by 4MB 00:03:26.236 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.236 EAL: request: mp_malloc_sync 00:03:26.236 EAL: No shared files mode enabled, IPC is disabled 00:03:26.236 EAL: Heap on socket 0 was shrunk by 4MB 00:03:26.236 EAL: Trying to obtain current memory policy. 00:03:26.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.236 EAL: Restoring previous memory policy: 4 00:03:26.236 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.236 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was expanded by 6MB 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was shrunk by 6MB 00:03:26.237 EAL: Trying to obtain current memory policy. 00:03:26.237 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.237 EAL: Restoring previous memory policy: 4 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was expanded by 10MB 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was shrunk by 10MB 00:03:26.237 EAL: Trying to obtain current memory policy. 
00:03:26.237 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.237 EAL: Restoring previous memory policy: 4 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was expanded by 18MB 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was shrunk by 18MB 00:03:26.237 EAL: Trying to obtain current memory policy. 00:03:26.237 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.237 EAL: Restoring previous memory policy: 4 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was expanded by 34MB 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was shrunk by 34MB 00:03:26.237 EAL: Trying to obtain current memory policy. 00:03:26.237 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.237 EAL: Restoring previous memory policy: 4 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was expanded by 66MB 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was shrunk by 66MB 00:03:26.237 EAL: Trying to obtain current memory policy. 00:03:26.237 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.237 EAL: Restoring previous memory policy: 4 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was expanded by 130MB 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was shrunk by 130MB 00:03:26.237 EAL: Trying to obtain current memory policy. 00:03:26.237 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.237 EAL: Restoring previous memory policy: 4 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was expanded by 258MB 00:03:26.237 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.237 EAL: request: mp_malloc_sync 00:03:26.237 EAL: No shared files mode enabled, IPC is disabled 00:03:26.237 EAL: Heap on socket 0 was shrunk by 258MB 00:03:26.237 EAL: Trying to obtain current memory policy. 
00:03:26.237 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.497 EAL: Restoring previous memory policy: 4 00:03:26.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.497 EAL: request: mp_malloc_sync 00:03:26.497 EAL: No shared files mode enabled, IPC is disabled 00:03:26.497 EAL: Heap on socket 0 was expanded by 514MB 00:03:26.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.497 EAL: request: mp_malloc_sync 00:03:26.497 EAL: No shared files mode enabled, IPC is disabled 00:03:26.497 EAL: Heap on socket 0 was shrunk by 514MB 00:03:26.497 EAL: Trying to obtain current memory policy. 00:03:26.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.757 EAL: Restoring previous memory policy: 4 00:03:26.757 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.757 EAL: request: mp_malloc_sync 00:03:26.757 EAL: No shared files mode enabled, IPC is disabled 00:03:26.757 EAL: Heap on socket 0 was expanded by 1026MB 00:03:26.757 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.757 EAL: request: mp_malloc_sync 00:03:26.757 EAL: No shared files mode enabled, IPC is disabled 00:03:26.757 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:26.757 passed 00:03:26.757 00:03:26.757 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.757 suites 1 1 n/a 0 0 00:03:26.757 tests 2 2 2 0 0 00:03:26.757 asserts 497 497 497 0 n/a 00:03:26.757 00:03:26.757 Elapsed time = 0.693 seconds 00:03:26.757 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.757 EAL: request: mp_malloc_sync 00:03:26.757 EAL: No shared files mode enabled, IPC is disabled 00:03:26.757 EAL: Heap on socket 0 was shrunk by 2MB 00:03:26.757 EAL: No shared files mode enabled, IPC is disabled 00:03:26.757 EAL: No shared files mode enabled, IPC is disabled 00:03:26.757 EAL: No shared files mode enabled, IPC is disabled 00:03:26.757 00:03:26.757 real 0m0.842s 00:03:26.757 user 0m0.439s 00:03:26.757 sys 0m0.377s 00:03:26.757 13:50:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.757 13:50:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:26.757 ************************************ 00:03:26.757 END TEST env_vtophys 00:03:26.757 ************************************ 00:03:26.757 13:50:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:26.757 13:50:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.757 13:50:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.757 13:50:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.018 ************************************ 00:03:27.018 START TEST env_pci 00:03:27.018 ************************************ 00:03:27.018 13:50:25 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:27.018 00:03:27.018 00:03:27.018 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.018 http://cunit.sourceforge.net/ 00:03:27.018 00:03:27.018 00:03:27.018 Suite: pci 00:03:27.018 Test: pci_hook ...[2024-10-30 13:50:25.093803] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 781390 has claimed it 00:03:27.018 EAL: Cannot find device (10000:00:01.0) 00:03:27.018 EAL: Failed to attach device on primary process 00:03:27.018 passed 00:03:27.018 00:03:27.018 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:27.018 suites 1 1 n/a 0 0 00:03:27.018 tests 1 1 1 0 0 00:03:27.018 asserts 25 25 25 0 n/a 00:03:27.018 00:03:27.018 Elapsed time = 0.031 seconds 00:03:27.018 00:03:27.018 real 0m0.053s 00:03:27.018 user 0m0.018s 00:03:27.018 sys 0m0.034s 00:03:27.018 13:50:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:27.018 13:50:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:27.018 ************************************ 00:03:27.018 END TEST env_pci 00:03:27.018 ************************************ 00:03:27.018 13:50:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:27.018 13:50:25 env -- env/env.sh@15 -- # uname 00:03:27.018 13:50:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:27.018 13:50:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:27.018 13:50:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:27.019 13:50:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:27.019 13:50:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.019 13:50:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.019 ************************************ 00:03:27.019 START TEST env_dpdk_post_init 00:03:27.019 ************************************ 00:03:27.019 13:50:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:27.019 EAL: Detected CPU lcores: 128 00:03:27.019 EAL: Detected NUMA nodes: 2 00:03:27.019 EAL: Detected shared linkage of DPDK 00:03:27.019 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:27.019 EAL: Selected IOVA mode 'VA' 00:03:27.019 EAL: VFIO support initialized 00:03:27.019 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:27.280 EAL: Using IOMMU type 1 (Type 1) 00:03:27.280 EAL: Ignore mapping IO port bar(1) 00:03:27.541 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:27.541 EAL: Ignore mapping IO port bar(1) 00:03:27.541 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:27.804 EAL: Ignore mapping IO port bar(1) 00:03:27.804 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:28.066 EAL: Ignore mapping IO port bar(1) 00:03:28.066 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:28.327 EAL: Ignore mapping IO port bar(1) 00:03:28.327 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:28.327 EAL: Ignore mapping IO port bar(1) 00:03:28.589 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:28.589 EAL: Ignore mapping IO port bar(1) 00:03:28.850 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:28.850 EAL: Ignore mapping IO port bar(1) 00:03:29.113 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:29.113 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:29.374 EAL: Ignore mapping IO port bar(1) 00:03:29.374 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:29.636 EAL: Ignore mapping IO port bar(1) 00:03:29.636 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:29.897 EAL: Ignore mapping IO port bar(1) 00:03:29.897 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:29.897 EAL: Ignore mapping IO port bar(1) 00:03:30.157 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:30.157 EAL: Ignore mapping IO port bar(1) 00:03:30.419 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:30.419 EAL: Ignore mapping IO port bar(1) 00:03:30.682 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:30.682 EAL: Ignore mapping IO port bar(1) 00:03:30.682 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:30.942 EAL: Ignore mapping IO port bar(1) 00:03:30.942 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:30.942 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:30.942 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:31.204 Starting DPDK initialization... 00:03:31.204 Starting SPDK post initialization... 00:03:31.204 SPDK NVMe probe 00:03:31.204 Attaching to 0000:65:00.0 00:03:31.204 Attached to 0000:65:00.0 00:03:31.204 Cleaning up... 00:03:33.121 00:03:33.121 real 0m5.761s 00:03:33.121 user 0m0.126s 00:03:33.121 sys 0m0.190s 00:03:33.121 13:50:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.121 13:50:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:33.121 ************************************ 00:03:33.121 END TEST env_dpdk_post_init 00:03:33.121 ************************************ 00:03:33.121 13:50:31 env -- env/env.sh@26 -- # uname 00:03:33.121 13:50:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:33.121 13:50:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.121 13:50:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.121 13:50:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.121 13:50:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.121 ************************************ 00:03:33.121 START TEST env_mem_callbacks 00:03:33.121 ************************************ 00:03:33.121 13:50:31 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.121 EAL: Detected CPU lcores: 128 00:03:33.121 EAL: Detected NUMA nodes: 2 00:03:33.121 EAL: Detected shared linkage of DPDK 00:03:33.121 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:33.121 EAL: Selected IOVA mode 'VA' 00:03:33.121 EAL: VFIO support initialized 00:03:33.121 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:33.121 00:03:33.121 00:03:33.121 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.121 http://cunit.sourceforge.net/ 00:03:33.121 00:03:33.121 00:03:33.121 Suite: memory 00:03:33.121 Test: test ... 
00:03:33.121 register 0x200000200000 2097152 00:03:33.121 malloc 3145728 00:03:33.121 register 0x200000400000 4194304 00:03:33.121 buf 0x200000500000 len 3145728 PASSED 00:03:33.121 malloc 64 00:03:33.121 buf 0x2000004fff40 len 64 PASSED 00:03:33.121 malloc 4194304 00:03:33.121 register 0x200000800000 6291456 00:03:33.121 buf 0x200000a00000 len 4194304 PASSED 00:03:33.121 free 0x200000500000 3145728 00:03:33.121 free 0x2000004fff40 64 00:03:33.121 unregister 0x200000400000 4194304 PASSED 00:03:33.121 free 0x200000a00000 4194304 00:03:33.121 unregister 0x200000800000 6291456 PASSED 00:03:33.121 malloc 8388608 00:03:33.122 register 0x200000400000 10485760 00:03:33.122 buf 0x200000600000 len 8388608 PASSED 00:03:33.122 free 0x200000600000 8388608 00:03:33.122 unregister 0x200000400000 10485760 PASSED 00:03:33.122 passed 00:03:33.122 00:03:33.122 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.122 suites 1 1 n/a 0 0 00:03:33.122 tests 1 1 1 0 0 00:03:33.122 asserts 15 15 15 0 n/a 00:03:33.122 00:03:33.122 Elapsed time = 0.010 seconds 00:03:33.122 00:03:33.122 real 0m0.068s 00:03:33.122 user 0m0.023s 00:03:33.122 sys 0m0.045s 00:03:33.122 13:50:31 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.122 13:50:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:33.122 ************************************ 00:03:33.122 END TEST env_mem_callbacks 00:03:33.122 ************************************ 00:03:33.122 00:03:33.122 real 0m7.551s 00:03:33.122 user 0m1.063s 00:03:33.122 sys 0m1.050s 00:03:33.122 13:50:31 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.122 13:50:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.122 ************************************ 00:03:33.122 END TEST env 00:03:33.122 ************************************ 00:03:33.122 13:50:31 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:33.122 13:50:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.122 13:50:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.122 13:50:31 -- common/autotest_common.sh@10 -- # set +x 00:03:33.122 ************************************ 00:03:33.122 START TEST rpc 00:03:33.122 ************************************ 00:03:33.122 13:50:31 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:33.122 * Looking for test storage... 
00:03:33.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.122 13:50:31 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.122 13:50:31 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.122 13:50:31 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.383 13:50:31 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.384 13:50:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.384 13:50:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.384 13:50:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.384 13:50:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.384 13:50:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.384 13:50:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.384 13:50:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.384 13:50:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.384 13:50:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.384 13:50:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.384 13:50:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.384 13:50:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:33.384 13:50:31 rpc -- scripts/common.sh@345 -- # : 1 00:03:33.384 13:50:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.384 13:50:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.384 13:50:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:33.384 13:50:31 rpc -- scripts/common.sh@353 -- # local d=1 00:03:33.384 13:50:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.384 13:50:31 rpc -- scripts/common.sh@355 -- # echo 1 00:03:33.384 13:50:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.384 13:50:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:33.384 13:50:31 rpc -- scripts/common.sh@353 -- # local d=2 00:03:33.384 13:50:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.384 13:50:31 rpc -- scripts/common.sh@355 -- # echo 2 00:03:33.384 13:50:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.384 13:50:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.384 13:50:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.384 13:50:31 rpc -- scripts/common.sh@368 -- # return 0 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.384 --rc genhtml_branch_coverage=1 00:03:33.384 --rc genhtml_function_coverage=1 00:03:33.384 --rc genhtml_legend=1 00:03:33.384 --rc geninfo_all_blocks=1 00:03:33.384 --rc geninfo_unexecuted_blocks=1 00:03:33.384 00:03:33.384 ' 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.384 --rc genhtml_branch_coverage=1 00:03:33.384 --rc genhtml_function_coverage=1 00:03:33.384 --rc genhtml_legend=1 00:03:33.384 --rc geninfo_all_blocks=1 00:03:33.384 --rc geninfo_unexecuted_blocks=1 00:03:33.384 00:03:33.384 ' 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.384 --rc genhtml_branch_coverage=1 00:03:33.384 --rc genhtml_function_coverage=1 
00:03:33.384 --rc genhtml_legend=1 00:03:33.384 --rc geninfo_all_blocks=1 00:03:33.384 --rc geninfo_unexecuted_blocks=1 00:03:33.384 00:03:33.384 ' 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.384 --rc genhtml_branch_coverage=1 00:03:33.384 --rc genhtml_function_coverage=1 00:03:33.384 --rc genhtml_legend=1 00:03:33.384 --rc geninfo_all_blocks=1 00:03:33.384 --rc geninfo_unexecuted_blocks=1 00:03:33.384 00:03:33.384 ' 00:03:33.384 13:50:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=782728 00:03:33.384 13:50:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:33.384 13:50:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:33.384 13:50:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 782728 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 782728 ']' 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:33.384 13:50:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.384 [2024-10-30 13:50:31.522779] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:03:33.384 [2024-10-30 13:50:31.522850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782728 ] 00:03:33.384 [2024-10-30 13:50:31.613782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.384 [2024-10-30 13:50:31.665764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:33.384 [2024-10-30 13:50:31.665810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 782728' to capture a snapshot of events at runtime. 00:03:33.384 [2024-10-30 13:50:31.665819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:33.384 [2024-10-30 13:50:31.665827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:33.384 [2024-10-30 13:50:31.665833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid782728 for offline analysis/debug. 
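The two app_setup_trace notices just above describe how to collect the bdev tracepoints enabled by the '-e bdev' flag on spdk_tgt. As a minimal sketch (the pid and shared-memory path are specific to this run and will differ elsewhere; a default build layout with the trace tool in build/bin is assumed), the capture boils down to:

    # live snapshot from the running target, exactly as the notice suggests
    ./build/bin/spdk_trace -s spdk_tgt -p 782728

    # or keep the shared-memory trace file around for offline analysis
    cp /dev/shm/spdk_tgt_trace.pid782728 /tmp/spdk_tgt_trace.pid782728

Both commands assume they are run from the SPDK repository root on the same host as the target.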
00:03:33.384 [2024-10-30 13:50:31.666623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.327 13:50:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:34.327 13:50:32 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:34.327 13:50:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:34.327 13:50:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:34.327 13:50:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:34.327 13:50:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:34.327 13:50:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.327 13:50:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.327 13:50:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 ************************************ 00:03:34.327 START TEST rpc_integrity 00:03:34.327 ************************************ 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.327 { 00:03:34.327 "name": "Malloc0", 00:03:34.327 "aliases": [ 00:03:34.327 "f47df251-e5ac-4a23-8813-6201346f7dc5" 00:03:34.327 ], 00:03:34.327 "product_name": "Malloc disk", 00:03:34.327 "block_size": 512, 00:03:34.327 "num_blocks": 16384, 00:03:34.327 "uuid": "f47df251-e5ac-4a23-8813-6201346f7dc5", 00:03:34.327 "assigned_rate_limits": { 00:03:34.327 "rw_ios_per_sec": 0, 00:03:34.327 "rw_mbytes_per_sec": 0, 00:03:34.327 "r_mbytes_per_sec": 0, 00:03:34.327 "w_mbytes_per_sec": 0 00:03:34.327 }, 
00:03:34.327 "claimed": false, 00:03:34.327 "zoned": false, 00:03:34.327 "supported_io_types": { 00:03:34.327 "read": true, 00:03:34.327 "write": true, 00:03:34.327 "unmap": true, 00:03:34.327 "flush": true, 00:03:34.327 "reset": true, 00:03:34.327 "nvme_admin": false, 00:03:34.327 "nvme_io": false, 00:03:34.327 "nvme_io_md": false, 00:03:34.327 "write_zeroes": true, 00:03:34.327 "zcopy": true, 00:03:34.327 "get_zone_info": false, 00:03:34.327 "zone_management": false, 00:03:34.327 "zone_append": false, 00:03:34.327 "compare": false, 00:03:34.327 "compare_and_write": false, 00:03:34.327 "abort": true, 00:03:34.327 "seek_hole": false, 00:03:34.327 "seek_data": false, 00:03:34.327 "copy": true, 00:03:34.327 "nvme_iov_md": false 00:03:34.327 }, 00:03:34.327 "memory_domains": [ 00:03:34.327 { 00:03:34.327 "dma_device_id": "system", 00:03:34.327 "dma_device_type": 1 00:03:34.327 }, 00:03:34.327 { 00:03:34.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.327 "dma_device_type": 2 00:03:34.327 } 00:03:34.327 ], 00:03:34.327 "driver_specific": {} 00:03:34.327 } 00:03:34.327 ]' 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 [2024-10-30 13:50:32.510780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:34.327 [2024-10-30 13:50:32.510824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.327 [2024-10-30 13:50:32.510841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd32f0 00:03:34.327 [2024-10-30 13:50:32.510849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.327 [2024-10-30 13:50:32.512391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.327 [2024-10-30 13:50:32.512432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.327 Passthru0 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.327 { 00:03:34.327 "name": "Malloc0", 00:03:34.327 "aliases": [ 00:03:34.327 "f47df251-e5ac-4a23-8813-6201346f7dc5" 00:03:34.327 ], 00:03:34.327 "product_name": "Malloc disk", 00:03:34.327 "block_size": 512, 00:03:34.327 "num_blocks": 16384, 00:03:34.327 "uuid": "f47df251-e5ac-4a23-8813-6201346f7dc5", 00:03:34.327 "assigned_rate_limits": { 00:03:34.327 "rw_ios_per_sec": 0, 00:03:34.327 "rw_mbytes_per_sec": 0, 00:03:34.327 "r_mbytes_per_sec": 0, 00:03:34.327 "w_mbytes_per_sec": 0 00:03:34.327 }, 00:03:34.327 "claimed": true, 00:03:34.327 "claim_type": "exclusive_write", 00:03:34.327 "zoned": false, 00:03:34.327 "supported_io_types": { 00:03:34.327 "read": true, 00:03:34.327 "write": true, 00:03:34.327 "unmap": true, 00:03:34.327 "flush": 
true, 00:03:34.327 "reset": true, 00:03:34.327 "nvme_admin": false, 00:03:34.327 "nvme_io": false, 00:03:34.327 "nvme_io_md": false, 00:03:34.327 "write_zeroes": true, 00:03:34.327 "zcopy": true, 00:03:34.327 "get_zone_info": false, 00:03:34.327 "zone_management": false, 00:03:34.327 "zone_append": false, 00:03:34.327 "compare": false, 00:03:34.327 "compare_and_write": false, 00:03:34.327 "abort": true, 00:03:34.327 "seek_hole": false, 00:03:34.327 "seek_data": false, 00:03:34.327 "copy": true, 00:03:34.327 "nvme_iov_md": false 00:03:34.327 }, 00:03:34.327 "memory_domains": [ 00:03:34.327 { 00:03:34.327 "dma_device_id": "system", 00:03:34.327 "dma_device_type": 1 00:03:34.327 }, 00:03:34.327 { 00:03:34.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.327 "dma_device_type": 2 00:03:34.327 } 00:03:34.327 ], 00:03:34.327 "driver_specific": {} 00:03:34.327 }, 00:03:34.327 { 00:03:34.327 "name": "Passthru0", 00:03:34.327 "aliases": [ 00:03:34.327 "21e4fba8-d06d-5ef6-baa4-db808df7b990" 00:03:34.327 ], 00:03:34.327 "product_name": "passthru", 00:03:34.327 "block_size": 512, 00:03:34.327 "num_blocks": 16384, 00:03:34.327 "uuid": "21e4fba8-d06d-5ef6-baa4-db808df7b990", 00:03:34.327 "assigned_rate_limits": { 00:03:34.327 "rw_ios_per_sec": 0, 00:03:34.327 "rw_mbytes_per_sec": 0, 00:03:34.327 "r_mbytes_per_sec": 0, 00:03:34.327 "w_mbytes_per_sec": 0 00:03:34.327 }, 00:03:34.327 "claimed": false, 00:03:34.327 "zoned": false, 00:03:34.327 "supported_io_types": { 00:03:34.327 "read": true, 00:03:34.327 "write": true, 00:03:34.327 "unmap": true, 00:03:34.327 "flush": true, 00:03:34.327 "reset": true, 00:03:34.327 "nvme_admin": false, 00:03:34.327 "nvme_io": false, 00:03:34.327 "nvme_io_md": false, 00:03:34.327 "write_zeroes": true, 00:03:34.327 "zcopy": true, 00:03:34.327 "get_zone_info": false, 00:03:34.327 "zone_management": false, 00:03:34.327 "zone_append": false, 00:03:34.327 "compare": false, 00:03:34.327 "compare_and_write": false, 00:03:34.327 "abort": true, 00:03:34.327 "seek_hole": false, 00:03:34.327 "seek_data": false, 00:03:34.327 "copy": true, 00:03:34.327 "nvme_iov_md": false 00:03:34.327 }, 00:03:34.327 "memory_domains": [ 00:03:34.327 { 00:03:34.327 "dma_device_id": "system", 00:03:34.327 "dma_device_type": 1 00:03:34.327 }, 00:03:34.327 { 00:03:34.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.327 "dma_device_type": 2 00:03:34.327 } 00:03:34.327 ], 00:03:34.327 "driver_specific": { 00:03:34.327 "passthru": { 00:03:34.327 "name": "Passthru0", 00:03:34.327 "base_bdev_name": "Malloc0" 00:03:34.327 } 00:03:34.327 } 00:03:34.327 } 00:03:34.327 ]' 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.327 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:34.327 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.328 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.328 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.328 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:34.328 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.328 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.328 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.328 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:34.328 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.328 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.328 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.328 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.328 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:34.589 13:50:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.589 00:03:34.589 real 0m0.300s 00:03:34.589 user 0m0.192s 00:03:34.589 sys 0m0.044s 00:03:34.589 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.589 13:50:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.589 ************************************ 00:03:34.589 END TEST rpc_integrity 00:03:34.589 ************************************ 00:03:34.589 13:50:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:34.589 13:50:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.589 13:50:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.589 13:50:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.589 ************************************ 00:03:34.589 START TEST rpc_plugins 00:03:34.589 ************************************ 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:34.589 { 00:03:34.589 "name": "Malloc1", 00:03:34.589 "aliases": [ 00:03:34.589 "722856ad-d1a1-4ce8-a819-195ec93779d6" 00:03:34.589 ], 00:03:34.589 "product_name": "Malloc disk", 00:03:34.589 "block_size": 4096, 00:03:34.589 "num_blocks": 256, 00:03:34.589 "uuid": "722856ad-d1a1-4ce8-a819-195ec93779d6", 00:03:34.589 "assigned_rate_limits": { 00:03:34.589 "rw_ios_per_sec": 0, 00:03:34.589 "rw_mbytes_per_sec": 0, 00:03:34.589 "r_mbytes_per_sec": 0, 00:03:34.589 "w_mbytes_per_sec": 0 00:03:34.589 }, 00:03:34.589 "claimed": false, 00:03:34.589 "zoned": false, 00:03:34.589 "supported_io_types": { 00:03:34.589 "read": true, 00:03:34.589 "write": true, 00:03:34.589 "unmap": true, 00:03:34.589 "flush": true, 00:03:34.589 "reset": true, 00:03:34.589 "nvme_admin": false, 00:03:34.589 "nvme_io": false, 00:03:34.589 "nvme_io_md": false, 00:03:34.589 "write_zeroes": true, 00:03:34.589 "zcopy": true, 00:03:34.589 "get_zone_info": false, 00:03:34.589 "zone_management": false, 00:03:34.589 "zone_append": false, 00:03:34.589 "compare": false, 00:03:34.589 "compare_and_write": false, 00:03:34.589 "abort": true, 00:03:34.589 "seek_hole": false, 00:03:34.589 "seek_data": false, 00:03:34.589 "copy": true, 00:03:34.589 "nvme_iov_md": false 
00:03:34.589 }, 00:03:34.589 "memory_domains": [ 00:03:34.589 { 00:03:34.589 "dma_device_id": "system", 00:03:34.589 "dma_device_type": 1 00:03:34.589 }, 00:03:34.589 { 00:03:34.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.589 "dma_device_type": 2 00:03:34.589 } 00:03:34.589 ], 00:03:34.589 "driver_specific": {} 00:03:34.589 } 00:03:34.589 ]' 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.589 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:34.589 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:34.850 13:50:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:34.850 00:03:34.850 real 0m0.150s 00:03:34.850 user 0m0.094s 00:03:34.850 sys 0m0.021s 00:03:34.850 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.850 13:50:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.850 ************************************ 00:03:34.850 END TEST rpc_plugins 00:03:34.850 ************************************ 00:03:34.850 13:50:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:34.850 13:50:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.851 13:50:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.851 13:50:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.851 ************************************ 00:03:34.851 START TEST rpc_trace_cmd_test 00:03:34.851 ************************************ 00:03:34.851 13:50:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:34.851 13:50:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:34.851 13:50:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:34.851 13:50:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.851 13:50:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.851 13:50:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.851 13:50:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:34.851 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid782728", 00:03:34.851 "tpoint_group_mask": "0x8", 00:03:34.851 "iscsi_conn": { 00:03:34.851 "mask": "0x2", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "scsi": { 00:03:34.851 "mask": "0x4", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "bdev": { 00:03:34.851 "mask": "0x8", 00:03:34.851 "tpoint_mask": "0xffffffffffffffff" 00:03:34.851 }, 00:03:34.851 "nvmf_rdma": { 00:03:34.851 "mask": "0x10", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "nvmf_tcp": { 00:03:34.851 "mask": "0x20", 00:03:34.851 
"tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "ftl": { 00:03:34.851 "mask": "0x40", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "blobfs": { 00:03:34.851 "mask": "0x80", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "dsa": { 00:03:34.851 "mask": "0x200", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "thread": { 00:03:34.851 "mask": "0x400", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "nvme_pcie": { 00:03:34.851 "mask": "0x800", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "iaa": { 00:03:34.851 "mask": "0x1000", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "nvme_tcp": { 00:03:34.851 "mask": "0x2000", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "bdev_nvme": { 00:03:34.851 "mask": "0x4000", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "sock": { 00:03:34.851 "mask": "0x8000", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "blob": { 00:03:34.851 "mask": "0x10000", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "bdev_raid": { 00:03:34.851 "mask": "0x20000", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 }, 00:03:34.851 "scheduler": { 00:03:34.851 "mask": "0x40000", 00:03:34.851 "tpoint_mask": "0x0" 00:03:34.851 } 00:03:34.851 }' 00:03:34.851 13:50:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:34.851 13:50:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:34.851 13:50:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:34.851 13:50:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:34.851 13:50:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:34.851 13:50:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:34.851 13:50:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:35.112 13:50:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:35.112 13:50:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:35.112 13:50:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:35.112 00:03:35.112 real 0m0.230s 00:03:35.112 user 0m0.193s 00:03:35.112 sys 0m0.029s 00:03:35.112 13:50:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.112 13:50:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:35.112 ************************************ 00:03:35.112 END TEST rpc_trace_cmd_test 00:03:35.112 ************************************ 00:03:35.112 13:50:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:35.112 13:50:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:35.112 13:50:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:35.112 13:50:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.112 13:50:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.112 13:50:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.112 ************************************ 00:03:35.112 START TEST rpc_daemon_integrity 00:03:35.112 ************************************ 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.112 13:50:33 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:35.112 { 00:03:35.112 "name": "Malloc2", 00:03:35.112 "aliases": [ 00:03:35.112 "42a5ce57-5768-4a9c-9fae-093f928a22fe" 00:03:35.112 ], 00:03:35.112 "product_name": "Malloc disk", 00:03:35.112 "block_size": 512, 00:03:35.112 "num_blocks": 16384, 00:03:35.112 "uuid": "42a5ce57-5768-4a9c-9fae-093f928a22fe", 00:03:35.112 "assigned_rate_limits": { 00:03:35.112 "rw_ios_per_sec": 0, 00:03:35.112 "rw_mbytes_per_sec": 0, 00:03:35.112 "r_mbytes_per_sec": 0, 00:03:35.112 "w_mbytes_per_sec": 0 00:03:35.112 }, 00:03:35.112 "claimed": false, 00:03:35.112 "zoned": false, 00:03:35.112 "supported_io_types": { 00:03:35.112 "read": true, 00:03:35.112 "write": true, 00:03:35.112 "unmap": true, 00:03:35.112 "flush": true, 00:03:35.112 "reset": true, 00:03:35.112 "nvme_admin": false, 00:03:35.112 "nvme_io": false, 00:03:35.112 "nvme_io_md": false, 00:03:35.112 "write_zeroes": true, 00:03:35.112 "zcopy": true, 00:03:35.112 "get_zone_info": false, 00:03:35.112 "zone_management": false, 00:03:35.112 "zone_append": false, 00:03:35.112 "compare": false, 00:03:35.112 "compare_and_write": false, 00:03:35.112 "abort": true, 00:03:35.112 "seek_hole": false, 00:03:35.112 "seek_data": false, 00:03:35.112 "copy": true, 00:03:35.112 "nvme_iov_md": false 00:03:35.112 }, 00:03:35.112 "memory_domains": [ 00:03:35.112 { 00:03:35.112 "dma_device_id": "system", 00:03:35.112 "dma_device_type": 1 00:03:35.112 }, 00:03:35.112 { 00:03:35.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.112 "dma_device_type": 2 00:03:35.112 } 00:03:35.112 ], 00:03:35.112 "driver_specific": {} 00:03:35.112 } 00:03:35.112 ]' 00:03:35.112 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:35.373 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:35.373 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:35.373 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.373 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.373 [2024-10-30 13:50:33.429280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:35.374 
[2024-10-30 13:50:33.429322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:35.374 [2024-10-30 13:50:33.429340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2104000 00:03:35.374 [2024-10-30 13:50:33.429347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:35.374 [2024-10-30 13:50:33.430795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:35.374 [2024-10-30 13:50:33.430829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:35.374 Passthru0 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:35.374 { 00:03:35.374 "name": "Malloc2", 00:03:35.374 "aliases": [ 00:03:35.374 "42a5ce57-5768-4a9c-9fae-093f928a22fe" 00:03:35.374 ], 00:03:35.374 "product_name": "Malloc disk", 00:03:35.374 "block_size": 512, 00:03:35.374 "num_blocks": 16384, 00:03:35.374 "uuid": "42a5ce57-5768-4a9c-9fae-093f928a22fe", 00:03:35.374 "assigned_rate_limits": { 00:03:35.374 "rw_ios_per_sec": 0, 00:03:35.374 "rw_mbytes_per_sec": 0, 00:03:35.374 "r_mbytes_per_sec": 0, 00:03:35.374 "w_mbytes_per_sec": 0 00:03:35.374 }, 00:03:35.374 "claimed": true, 00:03:35.374 "claim_type": "exclusive_write", 00:03:35.374 "zoned": false, 00:03:35.374 "supported_io_types": { 00:03:35.374 "read": true, 00:03:35.374 "write": true, 00:03:35.374 "unmap": true, 00:03:35.374 "flush": true, 00:03:35.374 "reset": true, 00:03:35.374 "nvme_admin": false, 00:03:35.374 "nvme_io": false, 00:03:35.374 "nvme_io_md": false, 00:03:35.374 "write_zeroes": true, 00:03:35.374 "zcopy": true, 00:03:35.374 "get_zone_info": false, 00:03:35.374 "zone_management": false, 00:03:35.374 "zone_append": false, 00:03:35.374 "compare": false, 00:03:35.374 "compare_and_write": false, 00:03:35.374 "abort": true, 00:03:35.374 "seek_hole": false, 00:03:35.374 "seek_data": false, 00:03:35.374 "copy": true, 00:03:35.374 "nvme_iov_md": false 00:03:35.374 }, 00:03:35.374 "memory_domains": [ 00:03:35.374 { 00:03:35.374 "dma_device_id": "system", 00:03:35.374 "dma_device_type": 1 00:03:35.374 }, 00:03:35.374 { 00:03:35.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.374 "dma_device_type": 2 00:03:35.374 } 00:03:35.374 ], 00:03:35.374 "driver_specific": {} 00:03:35.374 }, 00:03:35.374 { 00:03:35.374 "name": "Passthru0", 00:03:35.374 "aliases": [ 00:03:35.374 "da158ee6-754d-5c6b-971b-37a52fc89b40" 00:03:35.374 ], 00:03:35.374 "product_name": "passthru", 00:03:35.374 "block_size": 512, 00:03:35.374 "num_blocks": 16384, 00:03:35.374 "uuid": "da158ee6-754d-5c6b-971b-37a52fc89b40", 00:03:35.374 "assigned_rate_limits": { 00:03:35.374 "rw_ios_per_sec": 0, 00:03:35.374 "rw_mbytes_per_sec": 0, 00:03:35.374 "r_mbytes_per_sec": 0, 00:03:35.374 "w_mbytes_per_sec": 0 00:03:35.374 }, 00:03:35.374 "claimed": false, 00:03:35.374 "zoned": false, 00:03:35.374 "supported_io_types": { 00:03:35.374 "read": true, 00:03:35.374 "write": true, 00:03:35.374 "unmap": true, 00:03:35.374 "flush": true, 00:03:35.374 "reset": true, 
00:03:35.374 "nvme_admin": false, 00:03:35.374 "nvme_io": false, 00:03:35.374 "nvme_io_md": false, 00:03:35.374 "write_zeroes": true, 00:03:35.374 "zcopy": true, 00:03:35.374 "get_zone_info": false, 00:03:35.374 "zone_management": false, 00:03:35.374 "zone_append": false, 00:03:35.374 "compare": false, 00:03:35.374 "compare_and_write": false, 00:03:35.374 "abort": true, 00:03:35.374 "seek_hole": false, 00:03:35.374 "seek_data": false, 00:03:35.374 "copy": true, 00:03:35.374 "nvme_iov_md": false 00:03:35.374 }, 00:03:35.374 "memory_domains": [ 00:03:35.374 { 00:03:35.374 "dma_device_id": "system", 00:03:35.374 "dma_device_type": 1 00:03:35.374 }, 00:03:35.374 { 00:03:35.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.374 "dma_device_type": 2 00:03:35.374 } 00:03:35.374 ], 00:03:35.374 "driver_specific": { 00:03:35.374 "passthru": { 00:03:35.374 "name": "Passthru0", 00:03:35.374 "base_bdev_name": "Malloc2" 00:03:35.374 } 00:03:35.374 } 00:03:35.374 } 00:03:35.374 ]' 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:35.374 00:03:35.374 real 0m0.302s 00:03:35.374 user 0m0.185s 00:03:35.374 sys 0m0.049s 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.374 13:50:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.374 ************************************ 00:03:35.374 END TEST rpc_daemon_integrity 00:03:35.374 ************************************ 00:03:35.374 13:50:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:35.374 13:50:33 rpc -- rpc/rpc.sh@84 -- # killprocess 782728 00:03:35.374 13:50:33 rpc -- common/autotest_common.sh@954 -- # '[' -z 782728 ']' 00:03:35.374 13:50:33 rpc -- common/autotest_common.sh@958 -- # kill -0 782728 00:03:35.374 13:50:33 rpc -- common/autotest_common.sh@959 -- # uname 00:03:35.374 13:50:33 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:35.374 13:50:33 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782728 
00:03:35.635 13:50:33 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.635 13:50:33 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.635 13:50:33 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782728' 00:03:35.635 killing process with pid 782728 00:03:35.635 13:50:33 rpc -- common/autotest_common.sh@973 -- # kill 782728 00:03:35.635 13:50:33 rpc -- common/autotest_common.sh@978 -- # wait 782728 00:03:35.635 00:03:35.635 real 0m2.672s 00:03:35.635 user 0m3.383s 00:03:35.635 sys 0m0.832s 00:03:35.635 13:50:33 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.635 13:50:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.635 ************************************ 00:03:35.635 END TEST rpc 00:03:35.635 ************************************ 00:03:35.897 13:50:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.897 13:50:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.897 13:50:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.897 13:50:33 -- common/autotest_common.sh@10 -- # set +x 00:03:35.897 ************************************ 00:03:35.897 START TEST skip_rpc 00:03:35.897 ************************************ 00:03:35.897 13:50:34 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.897 * Looking for test storage... 00:03:35.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:35.897 13:50:34 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:35.897 13:50:34 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:35.897 13:50:34 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:36.158 13:50:34 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:36.158 13:50:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:36.158 13:50:34 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:36.158 13:50:34 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:36.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.158 --rc genhtml_branch_coverage=1 00:03:36.158 --rc genhtml_function_coverage=1 00:03:36.158 --rc genhtml_legend=1 00:03:36.158 --rc geninfo_all_blocks=1 00:03:36.158 --rc geninfo_unexecuted_blocks=1 00:03:36.158 00:03:36.158 ' 00:03:36.158 13:50:34 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:36.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.158 --rc genhtml_branch_coverage=1 00:03:36.158 --rc genhtml_function_coverage=1 00:03:36.158 --rc genhtml_legend=1 00:03:36.158 --rc geninfo_all_blocks=1 00:03:36.158 --rc geninfo_unexecuted_blocks=1 00:03:36.158 00:03:36.158 ' 00:03:36.158 13:50:34 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:36.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.158 --rc genhtml_branch_coverage=1 00:03:36.158 --rc genhtml_function_coverage=1 00:03:36.158 --rc genhtml_legend=1 00:03:36.158 --rc geninfo_all_blocks=1 00:03:36.158 --rc geninfo_unexecuted_blocks=1 00:03:36.158 00:03:36.158 ' 00:03:36.158 13:50:34 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:36.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.158 --rc genhtml_branch_coverage=1 00:03:36.158 --rc genhtml_function_coverage=1 00:03:36.158 --rc genhtml_legend=1 00:03:36.158 --rc geninfo_all_blocks=1 00:03:36.158 --rc geninfo_unexecuted_blocks=1 00:03:36.158 00:03:36.158 ' 00:03:36.158 13:50:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.158 13:50:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:36.158 13:50:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:36.158 13:50:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.158 13:50:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.158 13:50:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.158 ************************************ 00:03:36.158 START TEST skip_rpc 00:03:36.158 ************************************ 00:03:36.158 13:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:36.158 
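For reference, the bdev flow that the rpc_integrity and rpc_daemon_integrity tests above drive through rpc_cmd can be reproduced by hand against a running spdk_tgt. This is only a sketch: it assumes the default RPC socket (/var/tmp/spdk.sock), that it is run from the SPDK repository root, and that the malloc bdev comes back named Malloc0 as it does in the log above.

    # create an 8 MiB malloc bdev with 512-byte blocks, then stack a passthru bdev on it
    ./scripts/rpc.py bdev_malloc_create 8 512
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0

    # the integrity check is essentially a jq length comparison on the bdev list
    ./scripts/rpc.py bdev_get_bdevs | jq length     # expect 2

    # tear down in reverse order and confirm the list is empty again
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length     # expect 0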
13:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=783572 00:03:36.158 13:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.158 13:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:36.158 13:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:36.158 [2024-10-30 13:50:34.323652] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:03:36.158 [2024-10-30 13:50:34.323716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783572 ] 00:03:36.158 [2024-10-30 13:50:34.416394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.419 [2024-10-30 13:50:34.469301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 783572 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 783572 ']' 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 783572 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783572 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 783572' 00:03:41.710 killing process with pid 783572 00:03:41.710 13:50:39 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 783572 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 783572 00:03:41.710 00:03:41.710 real 0m5.264s 00:03:41.710 user 0m5.020s 00:03:41.710 sys 0m0.294s 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.710 13:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.710 ************************************ 00:03:41.710 END TEST skip_rpc 00:03:41.710 ************************************ 00:03:41.710 13:50:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:41.710 13:50:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.710 13:50:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.710 13:50:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.710 ************************************ 00:03:41.710 START TEST skip_rpc_with_json 00:03:41.710 ************************************ 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=784617 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 784617 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 784617 ']' 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:41.710 13:50:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.710 [2024-10-30 13:50:39.659150] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
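For reference, the skip_rpc case that just finished above boils down to starting spdk_tgt with its RPC server disabled and confirming that an RPC call cannot succeed. A minimal sketch of that flow, assuming the same in-tree paths as this workspace; the error message and cleanup here are illustrative, not the test's own:

  # Sketch only: what the skip_rpc run above exercises.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5   # give the app time to finish starting, as the test does

  # With --no-rpc-server, /var/tmp/spdk.sock never appears, so any RPC must fail.
  if ./scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC succeeded with --no-rpc-server" >&2
      exit 1
  fi

  kill "$spdk_pid"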
00:03:41.710 [2024-10-30 13:50:39.659200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784617 ] 00:03:41.710 [2024-10-30 13:50:39.743510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.710 [2024-10-30 13:50:39.774248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.282 [2024-10-30 13:50:40.445613] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:42.282 request: 00:03:42.282 { 00:03:42.282 "trtype": "tcp", 00:03:42.282 "method": "nvmf_get_transports", 00:03:42.282 "req_id": 1 00:03:42.282 } 00:03:42.282 Got JSON-RPC error response 00:03:42.282 response: 00:03:42.282 { 00:03:42.282 "code": -19, 00:03:42.282 "message": "No such device" 00:03:42.282 } 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.282 [2024-10-30 13:50:40.457707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.282 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.544 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.544 13:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.544 { 00:03:42.544 "subsystems": [ 00:03:42.544 { 00:03:42.544 "subsystem": "fsdev", 00:03:42.544 "config": [ 00:03:42.544 { 00:03:42.544 "method": "fsdev_set_opts", 00:03:42.544 "params": { 00:03:42.544 "fsdev_io_pool_size": 65535, 00:03:42.544 "fsdev_io_cache_size": 256 00:03:42.544 } 00:03:42.544 } 00:03:42.544 ] 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "subsystem": "vfio_user_target", 00:03:42.544 "config": null 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "subsystem": "keyring", 00:03:42.544 "config": [] 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "subsystem": "iobuf", 00:03:42.544 "config": [ 00:03:42.544 { 00:03:42.544 "method": "iobuf_set_options", 00:03:42.544 "params": { 00:03:42.544 "small_pool_count": 8192, 00:03:42.544 "large_pool_count": 1024, 00:03:42.544 "small_bufsize": 8192, 00:03:42.544 "large_bufsize": 135168, 00:03:42.544 "enable_numa": false 00:03:42.544 } 00:03:42.544 } 00:03:42.544 
] 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "subsystem": "sock", 00:03:42.544 "config": [ 00:03:42.544 { 00:03:42.544 "method": "sock_set_default_impl", 00:03:42.544 "params": { 00:03:42.544 "impl_name": "posix" 00:03:42.544 } 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "method": "sock_impl_set_options", 00:03:42.544 "params": { 00:03:42.544 "impl_name": "ssl", 00:03:42.544 "recv_buf_size": 4096, 00:03:42.544 "send_buf_size": 4096, 00:03:42.544 "enable_recv_pipe": true, 00:03:42.544 "enable_quickack": false, 00:03:42.544 "enable_placement_id": 0, 00:03:42.544 "enable_zerocopy_send_server": true, 00:03:42.544 "enable_zerocopy_send_client": false, 00:03:42.544 "zerocopy_threshold": 0, 00:03:42.544 "tls_version": 0, 00:03:42.544 "enable_ktls": false 00:03:42.544 } 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "method": "sock_impl_set_options", 00:03:42.544 "params": { 00:03:42.544 "impl_name": "posix", 00:03:42.544 "recv_buf_size": 2097152, 00:03:42.544 "send_buf_size": 2097152, 00:03:42.544 "enable_recv_pipe": true, 00:03:42.544 "enable_quickack": false, 00:03:42.544 "enable_placement_id": 0, 00:03:42.544 "enable_zerocopy_send_server": true, 00:03:42.544 "enable_zerocopy_send_client": false, 00:03:42.544 "zerocopy_threshold": 0, 00:03:42.544 "tls_version": 0, 00:03:42.544 "enable_ktls": false 00:03:42.544 } 00:03:42.544 } 00:03:42.544 ] 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "subsystem": "vmd", 00:03:42.544 "config": [] 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "subsystem": "accel", 00:03:42.544 "config": [ 00:03:42.544 { 00:03:42.544 "method": "accel_set_options", 00:03:42.544 "params": { 00:03:42.544 "small_cache_size": 128, 00:03:42.544 "large_cache_size": 16, 00:03:42.544 "task_count": 2048, 00:03:42.544 "sequence_count": 2048, 00:03:42.544 "buf_count": 2048 00:03:42.544 } 00:03:42.544 } 00:03:42.544 ] 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "subsystem": "bdev", 00:03:42.544 "config": [ 00:03:42.544 { 00:03:42.544 "method": "bdev_set_options", 00:03:42.544 "params": { 00:03:42.544 "bdev_io_pool_size": 65535, 00:03:42.544 "bdev_io_cache_size": 256, 00:03:42.544 "bdev_auto_examine": true, 00:03:42.544 "iobuf_small_cache_size": 128, 00:03:42.544 "iobuf_large_cache_size": 16 00:03:42.544 } 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "method": "bdev_raid_set_options", 00:03:42.544 "params": { 00:03:42.544 "process_window_size_kb": 1024, 00:03:42.544 "process_max_bandwidth_mb_sec": 0 00:03:42.544 } 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "method": "bdev_iscsi_set_options", 00:03:42.544 "params": { 00:03:42.544 "timeout_sec": 30 00:03:42.544 } 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "method": "bdev_nvme_set_options", 00:03:42.544 "params": { 00:03:42.544 "action_on_timeout": "none", 00:03:42.544 "timeout_us": 0, 00:03:42.544 "timeout_admin_us": 0, 00:03:42.544 "keep_alive_timeout_ms": 10000, 00:03:42.544 "arbitration_burst": 0, 00:03:42.544 "low_priority_weight": 0, 00:03:42.544 "medium_priority_weight": 0, 00:03:42.544 "high_priority_weight": 0, 00:03:42.544 "nvme_adminq_poll_period_us": 10000, 00:03:42.544 "nvme_ioq_poll_period_us": 0, 00:03:42.544 "io_queue_requests": 0, 00:03:42.544 "delay_cmd_submit": true, 00:03:42.544 "transport_retry_count": 4, 00:03:42.544 "bdev_retry_count": 3, 00:03:42.544 "transport_ack_timeout": 0, 00:03:42.544 "ctrlr_loss_timeout_sec": 0, 00:03:42.544 "reconnect_delay_sec": 0, 00:03:42.544 "fast_io_fail_timeout_sec": 0, 00:03:42.544 "disable_auto_failback": false, 00:03:42.544 "generate_uuids": false, 00:03:42.544 "transport_tos": 0, 
00:03:42.544 "nvme_error_stat": false, 00:03:42.544 "rdma_srq_size": 0, 00:03:42.544 "io_path_stat": false, 00:03:42.544 "allow_accel_sequence": false, 00:03:42.544 "rdma_max_cq_size": 0, 00:03:42.544 "rdma_cm_event_timeout_ms": 0, 00:03:42.544 "dhchap_digests": [ 00:03:42.544 "sha256", 00:03:42.544 "sha384", 00:03:42.544 "sha512" 00:03:42.544 ], 00:03:42.544 "dhchap_dhgroups": [ 00:03:42.544 "null", 00:03:42.544 "ffdhe2048", 00:03:42.544 "ffdhe3072", 00:03:42.544 "ffdhe4096", 00:03:42.544 "ffdhe6144", 00:03:42.544 "ffdhe8192" 00:03:42.544 ] 00:03:42.544 } 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "method": "bdev_nvme_set_hotplug", 00:03:42.544 "params": { 00:03:42.544 "period_us": 100000, 00:03:42.544 "enable": false 00:03:42.544 } 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "method": "bdev_wait_for_examine" 00:03:42.544 } 00:03:42.544 ] 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "subsystem": "scsi", 00:03:42.544 "config": null 00:03:42.544 }, 00:03:42.544 { 00:03:42.544 "subsystem": "scheduler", 00:03:42.544 "config": [ 00:03:42.544 { 00:03:42.544 "method": "framework_set_scheduler", 00:03:42.544 "params": { 00:03:42.544 "name": "static" 00:03:42.544 } 00:03:42.544 } 00:03:42.544 ] 00:03:42.545 }, 00:03:42.545 { 00:03:42.545 "subsystem": "vhost_scsi", 00:03:42.545 "config": [] 00:03:42.545 }, 00:03:42.545 { 00:03:42.545 "subsystem": "vhost_blk", 00:03:42.545 "config": [] 00:03:42.545 }, 00:03:42.545 { 00:03:42.545 "subsystem": "ublk", 00:03:42.545 "config": [] 00:03:42.545 }, 00:03:42.545 { 00:03:42.545 "subsystem": "nbd", 00:03:42.545 "config": [] 00:03:42.545 }, 00:03:42.545 { 00:03:42.545 "subsystem": "nvmf", 00:03:42.545 "config": [ 00:03:42.545 { 00:03:42.545 "method": "nvmf_set_config", 00:03:42.545 "params": { 00:03:42.545 "discovery_filter": "match_any", 00:03:42.545 "admin_cmd_passthru": { 00:03:42.545 "identify_ctrlr": false 00:03:42.545 }, 00:03:42.545 "dhchap_digests": [ 00:03:42.545 "sha256", 00:03:42.545 "sha384", 00:03:42.545 "sha512" 00:03:42.545 ], 00:03:42.545 "dhchap_dhgroups": [ 00:03:42.545 "null", 00:03:42.545 "ffdhe2048", 00:03:42.545 "ffdhe3072", 00:03:42.545 "ffdhe4096", 00:03:42.545 "ffdhe6144", 00:03:42.545 "ffdhe8192" 00:03:42.545 ] 00:03:42.545 } 00:03:42.545 }, 00:03:42.545 { 00:03:42.545 "method": "nvmf_set_max_subsystems", 00:03:42.545 "params": { 00:03:42.545 "max_subsystems": 1024 00:03:42.545 } 00:03:42.545 }, 00:03:42.545 { 00:03:42.545 "method": "nvmf_set_crdt", 00:03:42.545 "params": { 00:03:42.545 "crdt1": 0, 00:03:42.545 "crdt2": 0, 00:03:42.545 "crdt3": 0 00:03:42.545 } 00:03:42.545 }, 00:03:42.545 { 00:03:42.545 "method": "nvmf_create_transport", 00:03:42.545 "params": { 00:03:42.545 "trtype": "TCP", 00:03:42.545 "max_queue_depth": 128, 00:03:42.545 "max_io_qpairs_per_ctrlr": 127, 00:03:42.545 "in_capsule_data_size": 4096, 00:03:42.545 "max_io_size": 131072, 00:03:42.545 "io_unit_size": 131072, 00:03:42.545 "max_aq_depth": 128, 00:03:42.545 "num_shared_buffers": 511, 00:03:42.545 "buf_cache_size": 4294967295, 00:03:42.545 "dif_insert_or_strip": false, 00:03:42.545 "zcopy": false, 00:03:42.545 "c2h_success": true, 00:03:42.545 "sock_priority": 0, 00:03:42.545 "abort_timeout_sec": 1, 00:03:42.545 "ack_timeout": 0, 00:03:42.545 "data_wr_pool_size": 0 00:03:42.545 } 00:03:42.545 } 00:03:42.545 ] 00:03:42.545 }, 00:03:42.545 { 00:03:42.545 "subsystem": "iscsi", 00:03:42.545 "config": [ 00:03:42.545 { 00:03:42.545 "method": "iscsi_set_options", 00:03:42.545 "params": { 00:03:42.545 "node_base": "iqn.2016-06.io.spdk", 00:03:42.545 "max_sessions": 
128, 00:03:42.545 "max_connections_per_session": 2, 00:03:42.545 "max_queue_depth": 64, 00:03:42.545 "default_time2wait": 2, 00:03:42.545 "default_time2retain": 20, 00:03:42.545 "first_burst_length": 8192, 00:03:42.545 "immediate_data": true, 00:03:42.545 "allow_duplicated_isid": false, 00:03:42.545 "error_recovery_level": 0, 00:03:42.545 "nop_timeout": 60, 00:03:42.545 "nop_in_interval": 30, 00:03:42.545 "disable_chap": false, 00:03:42.545 "require_chap": false, 00:03:42.545 "mutual_chap": false, 00:03:42.545 "chap_group": 0, 00:03:42.545 "max_large_datain_per_connection": 64, 00:03:42.545 "max_r2t_per_connection": 4, 00:03:42.545 "pdu_pool_size": 36864, 00:03:42.545 "immediate_data_pool_size": 16384, 00:03:42.545 "data_out_pool_size": 2048 00:03:42.545 } 00:03:42.545 } 00:03:42.545 ] 00:03:42.545 } 00:03:42.545 ] 00:03:42.545 } 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 784617 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 784617 ']' 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 784617 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784617 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784617' 00:03:42.545 killing process with pid 784617 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 784617 00:03:42.545 13:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 784617 00:03:42.805 13:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=784958 00:03:42.805 13:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:42.805 13:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 784958 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 784958 ']' 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 784958 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784958 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 784958' 00:03:48.107 killing process with pid 784958 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 784958 00:03:48.107 13:50:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 784958 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:48.107 00:03:48.107 real 0m6.537s 00:03:48.107 user 0m6.447s 00:03:48.107 sys 0m0.546s 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.107 ************************************ 00:03:48.107 END TEST skip_rpc_with_json 00:03:48.107 ************************************ 00:03:48.107 13:50:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:48.107 13:50:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.107 13:50:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.107 13:50:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.107 ************************************ 00:03:48.107 START TEST skip_rpc_with_delay 00:03:48.107 ************************************ 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:48.107 [2024-10-30 
13:50:46.275365] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:48.107 00:03:48.107 real 0m0.077s 00:03:48.107 user 0m0.053s 00:03:48.107 sys 0m0.024s 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.107 13:50:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:48.107 ************************************ 00:03:48.107 END TEST skip_rpc_with_delay 00:03:48.107 ************************************ 00:03:48.107 13:50:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:48.107 13:50:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:48.107 13:50:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:48.107 13:50:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.107 13:50:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.107 13:50:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.107 ************************************ 00:03:48.107 START TEST exit_on_failed_rpc_init 00:03:48.107 ************************************ 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=786018 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 786018 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 786018 ']' 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.107 13:50:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.367 [2024-10-30 13:50:46.427177] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
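The skip_rpc_with_delay failure above is the expected outcome: --wait-for-rpc is rejected when the RPC server is disabled. A hedged sketch of that check, with paths abbreviated and the error handling simplified:

  # Sketch: --wait-for-rpc together with --no-rpc-server must be rejected,
  # since there is no RPC server to wait for.
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
      exit 1
  fi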
00:03:48.367 [2024-10-30 13:50:46.427228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786018 ] 00:03:48.367 [2024-10-30 13:50:46.512823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.367 [2024-10-30 13:50:46.543764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:48.935 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.194 [2024-10-30 13:50:47.290007] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:03:49.194 [2024-10-30 13:50:47.290058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786191 ] 00:03:49.194 [2024-10-30 13:50:47.378668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.194 [2024-10-30 13:50:47.414525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:49.194 [2024-10-30 13:50:47.414577] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
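exit_on_failed_rpc_init, whose socket-in-use error appears just above, starts one target on the default RPC socket and then expects a second instance to fail RPC initialization and exit non-zero. A rough sketch with the same core masks as in the log; the messages and cleanup are illustrative:

  # Sketch: the first target owns the default RPC socket /var/tmp/spdk.sock ...
  ./build/bin/spdk_tgt -m 0x1 &
  first_pid=$!
  # (the real test waits for the socket with waitforlisten before continuing)

  # ... so a second instance on the same default socket must fail RPC init
  # and exit non-zero, as the 'socket in use' error above shows.
  if ./build/bin/spdk_tgt -m 0x2; then
      echo "unexpected: second spdk_tgt started despite the RPC socket conflict" >&2
      exit 1
  fi

  kill "$first_pid"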
00:03:49.194 [2024-10-30 13:50:47.414587] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:49.194 [2024-10-30 13:50:47.414594] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 786018 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 786018 ']' 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 786018 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:49.194 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786018 00:03:49.454 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:49.454 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:49.454 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786018' 00:03:49.454 killing process with pid 786018 00:03:49.454 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 786018 00:03:49.454 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 786018 00:03:49.454 00:03:49.454 real 0m1.334s 00:03:49.454 user 0m1.557s 00:03:49.454 sys 0m0.390s 00:03:49.454 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.454 13:50:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:49.454 ************************************ 00:03:49.454 END TEST exit_on_failed_rpc_init 00:03:49.454 ************************************ 00:03:49.454 13:50:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.454 00:03:49.454 real 0m13.729s 00:03:49.454 user 0m13.312s 00:03:49.454 sys 0m1.565s 00:03:49.454 13:50:47 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.454 13:50:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.454 ************************************ 00:03:49.454 END TEST skip_rpc 00:03:49.454 ************************************ 00:03:49.714 13:50:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:49.714 13:50:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.714 13:50:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.714 13:50:47 -- 
common/autotest_common.sh@10 -- # set +x 00:03:49.714 ************************************ 00:03:49.714 START TEST rpc_client 00:03:49.714 ************************************ 00:03:49.714 13:50:47 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:49.714 * Looking for test storage... 00:03:49.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:49.714 13:50:47 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.714 13:50:47 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.714 13:50:47 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.714 13:50:47 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.714 13:50:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.714 13:50:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.714 13:50:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.714 13:50:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.715 13:50:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:49.975 13:50:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.975 13:50:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.975 13:50:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.975 13:50:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:49.975 13:50:48 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.975 13:50:48 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.975 --rc genhtml_branch_coverage=1 00:03:49.975 --rc genhtml_function_coverage=1 00:03:49.975 --rc genhtml_legend=1 00:03:49.975 --rc geninfo_all_blocks=1 00:03:49.975 --rc geninfo_unexecuted_blocks=1 00:03:49.975 00:03:49.975 ' 00:03:49.975 13:50:48 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.975 --rc genhtml_branch_coverage=1 00:03:49.975 --rc genhtml_function_coverage=1 00:03:49.975 --rc genhtml_legend=1 00:03:49.975 --rc geninfo_all_blocks=1 00:03:49.975 --rc geninfo_unexecuted_blocks=1 00:03:49.975 00:03:49.975 ' 00:03:49.975 13:50:48 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:49.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.975 --rc genhtml_branch_coverage=1 00:03:49.975 --rc genhtml_function_coverage=1 00:03:49.975 --rc genhtml_legend=1 00:03:49.975 --rc geninfo_all_blocks=1 00:03:49.975 --rc geninfo_unexecuted_blocks=1 00:03:49.975 00:03:49.975 ' 00:03:49.975 13:50:48 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.975 --rc genhtml_branch_coverage=1 00:03:49.975 --rc genhtml_function_coverage=1 00:03:49.975 --rc genhtml_legend=1 00:03:49.975 --rc geninfo_all_blocks=1 00:03:49.975 --rc geninfo_unexecuted_blocks=1 00:03:49.975 00:03:49.975 ' 00:03:49.975 13:50:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:49.975 OK 00:03:49.975 13:50:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:49.975 00:03:49.975 real 0m0.227s 00:03:49.975 user 0m0.123s 00:03:49.975 sys 0m0.115s 00:03:49.975 13:50:48 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.975 13:50:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:49.975 ************************************ 00:03:49.975 END TEST rpc_client 00:03:49.975 ************************************ 00:03:49.975 13:50:48 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
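Each of these suites is driven through the same run_test helper that prints the START TEST / END TEST banners and the real/user/sys totals seen throughout this log. A simplified, illustrative version of that pattern; this is not the actual autotest_common.sh implementation, and the function name is made up:

  # Illustrative only -- not the real autotest_common.sh implementation.
  run_test_sketch() {
      local name=$1
      shift
      echo "START TEST $name"
      time "$@"                 # the suite itself, e.g. rpc_client.sh
      local rc=$?
      echo "END TEST $name"
      return "$rc"
  }

  run_test_sketch rpc_client ./test/rpc_client/rpc_client.sh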
00:03:49.975 13:50:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.975 13:50:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.975 13:50:48 -- common/autotest_common.sh@10 -- # set +x 00:03:49.975 ************************************ 00:03:49.975 START TEST json_config 00:03:49.975 ************************************ 00:03:49.975 13:50:48 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:49.975 13:50:48 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.975 13:50:48 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.975 13:50:48 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.975 13:50:48 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.975 13:50:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.975 13:50:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.975 13:50:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.975 13:50:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.975 13:50:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.975 13:50:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.975 13:50:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.975 13:50:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.975 13:50:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.975 13:50:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.975 13:50:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.975 13:50:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:49.975 13:50:48 json_config -- scripts/common.sh@345 -- # : 1 00:03:49.975 13:50:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.975 13:50:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.975 13:50:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:49.975 13:50:48 json_config -- scripts/common.sh@353 -- # local d=1 00:03:49.975 13:50:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.975 13:50:48 json_config -- scripts/common.sh@355 -- # echo 1 00:03:50.238 13:50:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.238 13:50:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:50.238 13:50:48 json_config -- scripts/common.sh@353 -- # local d=2 00:03:50.238 13:50:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.238 13:50:48 json_config -- scripts/common.sh@355 -- # echo 2 00:03:50.238 13:50:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.238 13:50:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.238 13:50:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.238 13:50:48 json_config -- scripts/common.sh@368 -- # return 0 00:03:50.238 13:50:48 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.238 13:50:48 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.238 --rc genhtml_branch_coverage=1 00:03:50.238 --rc genhtml_function_coverage=1 00:03:50.238 --rc genhtml_legend=1 00:03:50.238 --rc geninfo_all_blocks=1 00:03:50.238 --rc geninfo_unexecuted_blocks=1 00:03:50.238 00:03:50.238 ' 00:03:50.238 13:50:48 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.238 --rc genhtml_branch_coverage=1 00:03:50.238 --rc genhtml_function_coverage=1 00:03:50.238 --rc genhtml_legend=1 00:03:50.238 --rc geninfo_all_blocks=1 00:03:50.238 --rc geninfo_unexecuted_blocks=1 00:03:50.238 00:03:50.238 ' 00:03:50.238 13:50:48 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.238 --rc genhtml_branch_coverage=1 00:03:50.238 --rc genhtml_function_coverage=1 00:03:50.238 --rc genhtml_legend=1 00:03:50.238 --rc geninfo_all_blocks=1 00:03:50.238 --rc geninfo_unexecuted_blocks=1 00:03:50.238 00:03:50.238 ' 00:03:50.238 13:50:48 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.238 --rc genhtml_branch_coverage=1 00:03:50.238 --rc genhtml_function_coverage=1 00:03:50.238 --rc genhtml_legend=1 00:03:50.238 --rc geninfo_all_blocks=1 00:03:50.238 --rc geninfo_unexecuted_blocks=1 00:03:50.238 00:03:50.238 ' 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:50.238 13:50:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:50.238 13:50:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:50.238 13:50:48 json_config -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.238 13:50:48 json_config -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.238 13:50:48 json_config -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.238 13:50:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.238 13:50:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.238 13:50:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.238 13:50:48 json_config -- paths/export.sh@5 -- # export PATH 00:03:50.238 13:50:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@51 -- # : 0 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
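json_config.sh begins by sourcing test/nvmf/common.sh, which is where the NVMF_* defaults and the generated host NQN above come from. A trimmed sketch of the environment it establishes, with the values copied from the log; the exact set of variables and the host-ID derivation shown here are illustrative:

  # Trimmed sketch of the defaults visible in the log above (not the full file).
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_TCP_IP_ADDRESS=127.0.0.1
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)           # nqn.2014-08.org.nvmexpress:uuid:...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}        # derivation here is illustrative
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'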
00:03:50.238 13:50:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:50.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:50.238 13:50:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:50.238 INFO: JSON configuration test init 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:50.238 13:50:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.238 13:50:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:50.238 13:50:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.238 13:50:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.238 13:50:48 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:50.238 13:50:48 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:50.238 13:50:48 json_config -- json_config/common.sh@10 -- # shift 00:03:50.238 13:50:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:50.238 13:50:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:50.238 13:50:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:50.238 13:50:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.238 13:50:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.239 13:50:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=786496 00:03:50.239 13:50:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:50.239 Waiting for target to run... 00:03:50.239 13:50:48 json_config -- json_config/common.sh@25 -- # waitforlisten 786496 /var/tmp/spdk_tgt.sock 00:03:50.239 13:50:48 json_config -- common/autotest_common.sh@835 -- # '[' -z 786496 ']' 00:03:50.239 13:50:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:50.239 13:50:48 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:50.239 13:50:48 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.239 13:50:48 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:50.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:50.239 13:50:48 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.239 13:50:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.239 [2024-10-30 13:50:48.398260] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
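The target for the JSON-config test is started on its own RPC socket and held at --wait-for-rpc until the framework is initialized explicitly, which is why the later steps all pass -s /var/tmp/spdk_tgt.sock to rpc.py. A condensed sketch of that start-up; the helper wiring around it is simplified:

  # Condensed sketch of json_config_test_start_app target --wait-for-rpc.
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  app_pid=$!
  # The waitforlisten helper seen in the log polls until the RPC socket answers.

  # With the app held at --wait-for-rpc, the generated NVMe config can be loaded
  # before subsystem initialization completes:
  ./scripts/gen_nvme.sh --json-with-subsystems | \
      ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config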
00:03:50.239 [2024-10-30 13:50:48.398312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786496 ] 00:03:50.500 [2024-10-30 13:50:48.705487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.500 [2024-10-30 13:50:48.730661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.073 13:50:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:51.073 13:50:49 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:51.073 13:50:49 json_config -- json_config/common.sh@26 -- # echo '' 00:03:51.073 00:03:51.073 13:50:49 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:51.073 13:50:49 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:51.073 13:50:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.073 13:50:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.073 13:50:49 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:51.073 13:50:49 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:51.073 13:50:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.073 13:50:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.073 13:50:49 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:51.073 13:50:49 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:51.073 13:50:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:51.643 13:50:49 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:51.643 13:50:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:51.643 13:50:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.643 13:50:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.643 13:50:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:51.643 13:50:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:51.643 13:50:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:51.643 13:50:49 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:51.643 13:50:49 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:51.643 13:50:49 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:51.643 13:50:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:51.643 13:50:49 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:51.903 13:50:49 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:51.903 13:50:49 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:51.903 13:50:49 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:51.903 13:50:49 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:51.903 13:50:49 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@54 -- # sort 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:51.904 13:50:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.904 13:50:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:51.904 13:50:49 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:51.904 13:50:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.904 13:50:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.904 13:50:50 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:51.904 13:50:50 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:51.904 13:50:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:51.904 13:50:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.904 13:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.904 MallocForNvmf0 00:03:51.904 13:50:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.904 13:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:52.164 MallocForNvmf1 00:03:52.164 13:50:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:52.164 13:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:52.425 [2024-10-30 13:50:50.531189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:52.425 13:50:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.425 13:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.685 13:50:50 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.685 13:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.685 13:50:50 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.685 13:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.945 13:50:51 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.945 13:50:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:53.206 [2024-10-30 13:50:51.269395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:53.206 13:50:51 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:53.206 13:50:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.206 13:50:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.206 13:50:51 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:53.206 13:50:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.206 13:50:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.206 13:50:51 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:53.206 13:50:51 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.206 13:50:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.466 MallocBdevForConfigChangeCheck 00:03:53.466 13:50:51 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:53.466 13:50:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.466 13:50:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.466 13:50:51 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:53.466 13:50:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.726 13:50:51 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:53.726 INFO: shutting down applications... 
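For reference, the NVMe-oF target configuration exercised above reduces to the following rpc.py sequence (a consolidated sketch of the calls visible in the trace; rpc.py stands for the workspace's scripts/rpc.py, and the socket path is the one used by this run):

    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420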
00:03:53.726 13:50:51 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:53.726 13:50:51 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:53.726 13:50:51 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:53.726 13:50:51 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:54.298 Calling clear_iscsi_subsystem 00:03:54.298 Calling clear_nvmf_subsystem 00:03:54.298 Calling clear_nbd_subsystem 00:03:54.298 Calling clear_ublk_subsystem 00:03:54.298 Calling clear_vhost_blk_subsystem 00:03:54.298 Calling clear_vhost_scsi_subsystem 00:03:54.298 Calling clear_bdev_subsystem 00:03:54.298 13:50:52 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:54.298 13:50:52 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:54.298 13:50:52 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:54.298 13:50:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:54.298 13:50:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:54.298 13:50:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:54.558 13:50:52 json_config -- json_config/json_config.sh@352 -- # break 00:03:54.558 13:50:52 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:54.558 13:50:52 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:54.558 13:50:52 json_config -- json_config/common.sh@31 -- # local app=target 00:03:54.558 13:50:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:54.558 13:50:52 json_config -- json_config/common.sh@35 -- # [[ -n 786496 ]] 00:03:54.558 13:50:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 786496 00:03:54.558 13:50:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:54.558 13:50:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:54.558 13:50:52 json_config -- json_config/common.sh@41 -- # kill -0 786496 00:03:54.558 13:50:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:55.131 13:50:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:55.131 13:50:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.131 13:50:53 json_config -- json_config/common.sh@41 -- # kill -0 786496 00:03:55.131 13:50:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:55.131 13:50:53 json_config -- json_config/common.sh@43 -- # break 00:03:55.131 13:50:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:55.131 13:50:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:55.131 SPDK target shutdown done 00:03:55.131 13:50:53 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:55.131 INFO: relaunching applications... 
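The shutdown traced above follows a SIGINT-then-poll pattern: send SIGINT to the target, then poll with kill -0 until the process exits, allowing up to 30 half-second intervals. A minimal standalone equivalent of that pattern (pid and bounds taken from this run; this is not the test script itself):

    pid=786496
    kill -SIGINT "$pid"
    for i in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break   # process gone, shutdown finished
        sleep 0.5
    done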
00:03:55.131 13:50:53 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.131 13:50:53 json_config -- json_config/common.sh@9 -- # local app=target 00:03:55.131 13:50:53 json_config -- json_config/common.sh@10 -- # shift 00:03:55.131 13:50:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:55.131 13:50:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:55.131 13:50:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:55.131 13:50:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.131 13:50:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.131 13:50:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=787633 00:03:55.131 13:50:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:55.131 Waiting for target to run... 00:03:55.131 13:50:53 json_config -- json_config/common.sh@25 -- # waitforlisten 787633 /var/tmp/spdk_tgt.sock 00:03:55.131 13:50:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.131 13:50:53 json_config -- common/autotest_common.sh@835 -- # '[' -z 787633 ']' 00:03:55.131 13:50:53 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:55.131 13:50:53 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.131 13:50:53 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:55.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:55.131 13:50:53 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.131 13:50:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.131 [2024-10-30 13:50:53.259430] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:03:55.131 [2024-10-30 13:50:53.259490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787633 ] 00:03:55.392 [2024-10-30 13:50:53.569326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.392 [2024-10-30 13:50:53.594126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.965 [2024-10-30 13:50:54.093854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.965 [2024-10-30 13:50:54.126226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.965 13:50:54 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.965 13:50:54 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:55.965 13:50:54 json_config -- json_config/common.sh@26 -- # echo '' 00:03:55.965 00:03:55.965 13:50:54 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:55.965 13:50:54 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:55.965 INFO: Checking if target configuration is the same... 
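The relaunch above is the persistence round trip the test is built around: the live configuration is dumped with save_config, and a fresh spdk_tgt is started with --json pointing at that dump, after which the configurations are compared. A sketch of the round trip (the relaunch command is verbatim from this run with the workspace prefix shortened; redirecting save_config into the file is inferred from the test flow, not shown literally in the trace):

    rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json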
00:03:55.965 13:50:54 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.965 13:50:54 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:55.965 13:50:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.965 + '[' 2 -ne 2 ']' 00:03:55.965 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:55.965 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:55.965 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.965 +++ basename /dev/fd/62 00:03:55.965 ++ mktemp /tmp/62.XXX 00:03:55.965 + tmp_file_1=/tmp/62.vij 00:03:55.965 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.965 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.965 + tmp_file_2=/tmp/spdk_tgt_config.json.dDx 00:03:55.965 + ret=0 00:03:55.965 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.227 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.488 + diff -u /tmp/62.vij /tmp/spdk_tgt_config.json.dDx 00:03:56.488 + echo 'INFO: JSON config files are the same' 00:03:56.488 INFO: JSON config files are the same 00:03:56.488 + rm /tmp/62.vij /tmp/spdk_tgt_config.json.dDx 00:03:56.488 + exit 0 00:03:56.488 13:50:54 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:56.488 13:50:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:56.488 INFO: changing configuration and checking if this can be detected... 00:03:56.488 13:50:54 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.488 13:50:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.488 13:50:54 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.488 13:50:54 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:56.488 13:50:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.488 + '[' 2 -ne 2 ']' 00:03:56.489 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:56.489 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:56.489 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.489 +++ basename /dev/fd/62 00:03:56.489 ++ mktemp /tmp/62.XXX 00:03:56.489 + tmp_file_1=/tmp/62.F5i 00:03:56.489 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.489 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.489 + tmp_file_2=/tmp/spdk_tgt_config.json.JsU 00:03:56.489 + ret=0 00:03:56.489 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.060 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.060 + diff -u /tmp/62.F5i /tmp/spdk_tgt_config.json.JsU 00:03:57.060 + ret=1 00:03:57.060 + echo '=== Start of file: /tmp/62.F5i ===' 00:03:57.060 + cat /tmp/62.F5i 00:03:57.060 + echo '=== End of file: /tmp/62.F5i ===' 00:03:57.060 + echo '' 00:03:57.060 + echo '=== Start of file: /tmp/spdk_tgt_config.json.JsU ===' 00:03:57.060 + cat /tmp/spdk_tgt_config.json.JsU 00:03:57.060 + echo '=== End of file: /tmp/spdk_tgt_config.json.JsU ===' 00:03:57.060 + echo '' 00:03:57.060 + rm /tmp/62.F5i /tmp/spdk_tgt_config.json.JsU 00:03:57.060 + exit 1 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:57.060 INFO: configuration change detected. 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:57.060 13:50:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.060 13:50:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@324 -- # [[ -n 787633 ]] 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:57.060 13:50:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.060 13:50:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:57.060 13:50:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.060 13:50:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.060 13:50:55 json_config -- json_config/json_config.sh@330 -- # killprocess 787633 00:03:57.060 13:50:55 json_config -- common/autotest_common.sh@954 -- # '[' -z 787633 ']' 00:03:57.060 13:50:55 json_config -- common/autotest_common.sh@958 -- # kill -0 787633 00:03:57.060 13:50:55 json_config -- common/autotest_common.sh@959 -- # uname 00:03:57.061 13:50:55 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.061 13:50:55 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787633 00:03:57.061 13:50:55 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.061 13:50:55 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.061 13:50:55 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787633' 00:03:57.061 killing process with pid 787633 00:03:57.061 13:50:55 json_config -- common/autotest_common.sh@973 -- # kill 787633 00:03:57.061 13:50:55 json_config -- common/autotest_common.sh@978 -- # wait 787633 00:03:57.321 13:50:55 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.321 13:50:55 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:57.321 13:50:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.321 13:50:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.321 13:50:55 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:57.321 13:50:55 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:57.321 INFO: Success 00:03:57.321 00:03:57.321 real 0m7.443s 00:03:57.321 user 0m9.046s 00:03:57.321 sys 0m1.965s 00:03:57.321 13:50:55 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.321 13:50:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.321 ************************************ 00:03:57.321 END TEST json_config 00:03:57.321 ************************************ 00:03:57.321 13:50:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:57.321 13:50:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.321 13:50:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.321 13:50:55 -- common/autotest_common.sh@10 -- # set +x 00:03:57.583 ************************************ 00:03:57.583 START TEST json_config_extra_key 00:03:57.583 ************************************ 00:03:57.583 13:50:55 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:57.583 13:50:55 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.583 13:50:55 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.583 13:50:55 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.583 13:50:55 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.583 13:50:55 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.583 13:50:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:57.583 13:50:55 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.583 13:50:55 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.583 --rc genhtml_branch_coverage=1 00:03:57.583 --rc genhtml_function_coverage=1 00:03:57.583 --rc genhtml_legend=1 00:03:57.583 --rc geninfo_all_blocks=1 00:03:57.583 --rc geninfo_unexecuted_blocks=1 00:03:57.583 00:03:57.583 ' 00:03:57.583 13:50:55 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.583 --rc genhtml_branch_coverage=1 00:03:57.583 --rc genhtml_function_coverage=1 00:03:57.583 --rc genhtml_legend=1 00:03:57.583 --rc geninfo_all_blocks=1 00:03:57.583 --rc geninfo_unexecuted_blocks=1 00:03:57.583 00:03:57.583 ' 00:03:57.583 13:50:55 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.583 --rc genhtml_branch_coverage=1 00:03:57.583 --rc genhtml_function_coverage=1 00:03:57.583 --rc genhtml_legend=1 00:03:57.583 --rc geninfo_all_blocks=1 00:03:57.584 --rc geninfo_unexecuted_blocks=1 00:03:57.584 00:03:57.584 ' 00:03:57.584 13:50:55 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.584 --rc genhtml_branch_coverage=1 00:03:57.584 --rc genhtml_function_coverage=1 00:03:57.584 --rc genhtml_legend=1 00:03:57.584 --rc geninfo_all_blocks=1 00:03:57.584 --rc geninfo_unexecuted_blocks=1 00:03:57.584 00:03:57.584 ' 00:03:57.584 13:50:55 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.584 13:50:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:57.584 13:50:55 json_config_extra_key -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.584 13:50:55 json_config_extra_key -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.584 13:50:55 json_config_extra_key -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.584 13:50:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.584 13:50:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.584 13:50:55 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.584 13:50:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:57.584 13:50:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:57.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:57.584 13:50:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:57.584 INFO: launching applications... 
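The "integer expression expected" message above comes from nvmf/common.sh line 33 running a numeric test, '[' '' -eq 1 ']', while the guarded variable is empty in this environment; the condition is treated as false and the run continues, as the following nvmf/common.sh@37 trace line shows. A two-line bash reproduction of the same message (illustrative only, not part of the captured output):

    flag=''
    [ "$flag" -eq 1 ] && echo enabled   # triggers: [: : integer expression expected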
00:03:57.584 13:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=788365 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:57.584 Waiting for target to run... 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 788365 /var/tmp/spdk_tgt.sock 00:03:57.584 13:50:55 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 788365 ']' 00:03:57.584 13:50:55 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.584 13:50:55 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:57.584 13:50:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:57.584 13:50:55 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.584 13:50:55 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:57.584 13:50:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:57.847 [2024-10-30 13:50:55.907999] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:03:57.847 [2024-10-30 13:50:55.908077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788365 ] 00:03:58.108 [2024-10-30 13:50:56.241537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.108 [2024-10-30 13:50:56.270993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.678 13:50:56 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.678 13:50:56 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:58.678 13:50:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:58.678 00:03:58.678 13:50:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:58.678 INFO: shutting down applications... 
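waitforlisten above blocks until the freshly launched spdk_tgt answers on its UNIX RPC socket; the trace shows max_retries=100 and the /var/tmp/spdk_tgt.sock address. One way to approximate that readiness check by hand, not the helper's actual implementation, is to poll a cheap RPC such as spdk_get_version (the method appears in the rpc_get_methods output later in this log; the 0.1 s interval is an arbitrary choice):

    for i in $(seq 1 100); do
        rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1 && break   # target is up
        sleep 0.1
    done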
00:03:58.678 13:50:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:58.678 13:50:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:58.678 13:50:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:58.678 13:50:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 788365 ]] 00:03:58.678 13:50:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 788365 00:03:58.678 13:50:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:58.678 13:50:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.678 13:50:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 788365 00:03:58.678 13:50:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:58.971 13:50:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:58.971 13:50:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.971 13:50:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 788365 00:03:58.971 13:50:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:58.971 13:50:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:58.971 13:50:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:58.971 13:50:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:58.971 SPDK target shutdown done 00:03:58.971 13:50:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:58.971 Success 00:03:58.971 00:03:58.971 real 0m1.584s 00:03:58.971 user 0m1.186s 00:03:58.971 sys 0m0.439s 00:03:58.971 13:50:57 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.971 13:50:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:58.971 ************************************ 00:03:58.971 END TEST json_config_extra_key 00:03:58.971 ************************************ 00:03:58.971 13:50:57 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:58.971 13:50:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.971 13:50:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.971 13:50:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.232 ************************************ 00:03:59.232 START TEST alias_rpc 00:03:59.232 ************************************ 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:59.232 * Looking for test storage... 
00:03:59.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.232 13:50:57 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.232 --rc genhtml_branch_coverage=1 00:03:59.232 --rc genhtml_function_coverage=1 00:03:59.232 --rc genhtml_legend=1 00:03:59.232 --rc geninfo_all_blocks=1 00:03:59.232 --rc geninfo_unexecuted_blocks=1 00:03:59.232 00:03:59.232 ' 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.232 --rc genhtml_branch_coverage=1 00:03:59.232 --rc genhtml_function_coverage=1 00:03:59.232 --rc genhtml_legend=1 00:03:59.232 --rc geninfo_all_blocks=1 00:03:59.232 --rc geninfo_unexecuted_blocks=1 00:03:59.232 00:03:59.232 ' 00:03:59.232 13:50:57 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:59.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.232 --rc genhtml_branch_coverage=1 00:03:59.232 --rc genhtml_function_coverage=1 00:03:59.232 --rc genhtml_legend=1 00:03:59.232 --rc geninfo_all_blocks=1 00:03:59.232 --rc geninfo_unexecuted_blocks=1 00:03:59.232 00:03:59.232 ' 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.232 --rc genhtml_branch_coverage=1 00:03:59.232 --rc genhtml_function_coverage=1 00:03:59.232 --rc genhtml_legend=1 00:03:59.232 --rc geninfo_all_blocks=1 00:03:59.232 --rc geninfo_unexecuted_blocks=1 00:03:59.232 00:03:59.232 ' 00:03:59.232 13:50:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:59.232 13:50:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=788741 00:03:59.232 13:50:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 788741 00:03:59.232 13:50:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 788741 ']' 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.232 13:50:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.493 [2024-10-30 13:50:57.562091] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:03:59.493 [2024-10-30 13:50:57.562163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788741 ] 00:03:59.493 [2024-10-30 13:50:57.649999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.493 [2024-10-30 13:50:57.684361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.065 13:50:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.065 13:50:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:00.065 13:50:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:00.326 13:50:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 788741 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 788741 ']' 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 788741 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 788741 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 788741' 00:04:00.326 killing process with pid 788741 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@973 -- # kill 788741 00:04:00.326 13:50:58 alias_rpc -- common/autotest_common.sh@978 -- # wait 788741 00:04:00.587 00:04:00.587 real 0m1.493s 00:04:00.587 user 0m1.636s 00:04:00.587 sys 0m0.419s 00:04:00.587 13:50:58 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.587 13:50:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.587 ************************************ 00:04:00.587 END TEST alias_rpc 00:04:00.587 ************************************ 00:04:00.587 13:50:58 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:00.587 13:50:58 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:00.587 13:50:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.587 13:50:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.587 13:50:58 -- common/autotest_common.sh@10 -- # set +x 00:04:00.587 ************************************ 00:04:00.587 START TEST spdkcli_tcp 00:04:00.587 ************************************ 00:04:00.587 13:50:58 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:00.848 * Looking for test storage... 
00:04:00.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:00.848 13:50:58 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.848 13:50:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.848 13:50:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.848 13:50:59 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.848 13:50:59 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:00.848 13:50:59 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.848 13:50:59 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.848 --rc genhtml_branch_coverage=1 00:04:00.848 --rc genhtml_function_coverage=1 00:04:00.848 --rc genhtml_legend=1 00:04:00.849 --rc geninfo_all_blocks=1 00:04:00.849 --rc geninfo_unexecuted_blocks=1 00:04:00.849 00:04:00.849 ' 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.849 --rc genhtml_branch_coverage=1 00:04:00.849 --rc genhtml_function_coverage=1 00:04:00.849 --rc genhtml_legend=1 00:04:00.849 --rc geninfo_all_blocks=1 00:04:00.849 --rc 
geninfo_unexecuted_blocks=1 00:04:00.849 00:04:00.849 ' 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:00.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.849 --rc genhtml_branch_coverage=1 00:04:00.849 --rc genhtml_function_coverage=1 00:04:00.849 --rc genhtml_legend=1 00:04:00.849 --rc geninfo_all_blocks=1 00:04:00.849 --rc geninfo_unexecuted_blocks=1 00:04:00.849 00:04:00.849 ' 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.849 --rc genhtml_branch_coverage=1 00:04:00.849 --rc genhtml_function_coverage=1 00:04:00.849 --rc genhtml_legend=1 00:04:00.849 --rc geninfo_all_blocks=1 00:04:00.849 --rc geninfo_unexecuted_blocks=1 00:04:00.849 00:04:00.849 ' 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=789071 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 789071 00:04:00.849 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 789071 ']' 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.849 13:50:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.109 [2024-10-30 13:50:59.149848] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:04:01.109 [2024-10-30 13:50:59.149923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789071 ] 00:04:01.109 [2024-10-30 13:50:59.235995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:01.109 [2024-10-30 13:50:59.272586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.109 [2024-10-30 13:50:59.272587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.678 13:50:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.678 13:50:59 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:01.678 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:01.678 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=789226 00:04:01.678 13:50:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:01.937 [ 00:04:01.937 "bdev_malloc_delete", 00:04:01.937 "bdev_malloc_create", 00:04:01.937 "bdev_null_resize", 00:04:01.937 "bdev_null_delete", 00:04:01.937 "bdev_null_create", 00:04:01.937 "bdev_nvme_cuse_unregister", 00:04:01.937 "bdev_nvme_cuse_register", 00:04:01.937 "bdev_opal_new_user", 00:04:01.938 "bdev_opal_set_lock_state", 00:04:01.938 "bdev_opal_delete", 00:04:01.938 "bdev_opal_get_info", 00:04:01.938 "bdev_opal_create", 00:04:01.938 "bdev_nvme_opal_revert", 00:04:01.938 "bdev_nvme_opal_init", 00:04:01.938 "bdev_nvme_send_cmd", 00:04:01.938 "bdev_nvme_set_keys", 00:04:01.938 "bdev_nvme_get_path_iostat", 00:04:01.938 "bdev_nvme_get_mdns_discovery_info", 00:04:01.938 "bdev_nvme_stop_mdns_discovery", 00:04:01.938 "bdev_nvme_start_mdns_discovery", 00:04:01.938 "bdev_nvme_set_multipath_policy", 00:04:01.938 "bdev_nvme_set_preferred_path", 00:04:01.938 "bdev_nvme_get_io_paths", 00:04:01.938 "bdev_nvme_remove_error_injection", 00:04:01.938 "bdev_nvme_add_error_injection", 00:04:01.938 "bdev_nvme_get_discovery_info", 00:04:01.938 "bdev_nvme_stop_discovery", 00:04:01.938 "bdev_nvme_start_discovery", 00:04:01.938 "bdev_nvme_get_controller_health_info", 00:04:01.938 "bdev_nvme_disable_controller", 00:04:01.938 "bdev_nvme_enable_controller", 00:04:01.938 "bdev_nvme_reset_controller", 00:04:01.938 "bdev_nvme_get_transport_statistics", 00:04:01.938 "bdev_nvme_apply_firmware", 00:04:01.938 "bdev_nvme_detach_controller", 00:04:01.938 "bdev_nvme_get_controllers", 00:04:01.938 "bdev_nvme_attach_controller", 00:04:01.938 "bdev_nvme_set_hotplug", 00:04:01.938 "bdev_nvme_set_options", 00:04:01.938 "bdev_passthru_delete", 00:04:01.938 "bdev_passthru_create", 00:04:01.938 "bdev_lvol_set_parent_bdev", 00:04:01.938 "bdev_lvol_set_parent", 00:04:01.938 "bdev_lvol_check_shallow_copy", 00:04:01.938 "bdev_lvol_start_shallow_copy", 00:04:01.938 "bdev_lvol_grow_lvstore", 00:04:01.938 "bdev_lvol_get_lvols", 00:04:01.938 "bdev_lvol_get_lvstores", 00:04:01.938 "bdev_lvol_delete", 00:04:01.938 "bdev_lvol_set_read_only", 00:04:01.938 "bdev_lvol_resize", 00:04:01.938 "bdev_lvol_decouple_parent", 00:04:01.938 "bdev_lvol_inflate", 00:04:01.938 "bdev_lvol_rename", 00:04:01.938 "bdev_lvol_clone_bdev", 00:04:01.938 "bdev_lvol_clone", 00:04:01.938 "bdev_lvol_snapshot", 00:04:01.938 "bdev_lvol_create", 00:04:01.938 "bdev_lvol_delete_lvstore", 00:04:01.938 "bdev_lvol_rename_lvstore", 
00:04:01.938 "bdev_lvol_create_lvstore", 00:04:01.938 "bdev_raid_set_options", 00:04:01.938 "bdev_raid_remove_base_bdev", 00:04:01.938 "bdev_raid_add_base_bdev", 00:04:01.938 "bdev_raid_delete", 00:04:01.938 "bdev_raid_create", 00:04:01.938 "bdev_raid_get_bdevs", 00:04:01.938 "bdev_error_inject_error", 00:04:01.938 "bdev_error_delete", 00:04:01.938 "bdev_error_create", 00:04:01.938 "bdev_split_delete", 00:04:01.938 "bdev_split_create", 00:04:01.938 "bdev_delay_delete", 00:04:01.938 "bdev_delay_create", 00:04:01.938 "bdev_delay_update_latency", 00:04:01.938 "bdev_zone_block_delete", 00:04:01.938 "bdev_zone_block_create", 00:04:01.938 "blobfs_create", 00:04:01.938 "blobfs_detect", 00:04:01.938 "blobfs_set_cache_size", 00:04:01.938 "bdev_aio_delete", 00:04:01.938 "bdev_aio_rescan", 00:04:01.938 "bdev_aio_create", 00:04:01.938 "bdev_ftl_set_property", 00:04:01.938 "bdev_ftl_get_properties", 00:04:01.938 "bdev_ftl_get_stats", 00:04:01.938 "bdev_ftl_unmap", 00:04:01.938 "bdev_ftl_unload", 00:04:01.938 "bdev_ftl_delete", 00:04:01.938 "bdev_ftl_load", 00:04:01.938 "bdev_ftl_create", 00:04:01.938 "bdev_virtio_attach_controller", 00:04:01.938 "bdev_virtio_scsi_get_devices", 00:04:01.938 "bdev_virtio_detach_controller", 00:04:01.938 "bdev_virtio_blk_set_hotplug", 00:04:01.938 "bdev_iscsi_delete", 00:04:01.938 "bdev_iscsi_create", 00:04:01.938 "bdev_iscsi_set_options", 00:04:01.938 "accel_error_inject_error", 00:04:01.938 "ioat_scan_accel_module", 00:04:01.938 "ae4dma_scan_accel_module", 00:04:01.938 "dsa_scan_accel_module", 00:04:01.938 "iaa_scan_accel_module", 00:04:01.938 "vfu_virtio_create_fs_endpoint", 00:04:01.938 "vfu_virtio_create_scsi_endpoint", 00:04:01.938 "vfu_virtio_scsi_remove_target", 00:04:01.938 "vfu_virtio_scsi_add_target", 00:04:01.938 "vfu_virtio_create_blk_endpoint", 00:04:01.938 "vfu_virtio_delete_endpoint", 00:04:01.938 "keyring_file_remove_key", 00:04:01.938 "keyring_file_add_key", 00:04:01.938 "keyring_linux_set_options", 00:04:01.938 "fsdev_aio_delete", 00:04:01.938 "fsdev_aio_create", 00:04:01.938 "iscsi_get_histogram", 00:04:01.938 "iscsi_enable_histogram", 00:04:01.938 "iscsi_set_options", 00:04:01.938 "iscsi_get_auth_groups", 00:04:01.938 "iscsi_auth_group_remove_secret", 00:04:01.938 "iscsi_auth_group_add_secret", 00:04:01.938 "iscsi_delete_auth_group", 00:04:01.938 "iscsi_create_auth_group", 00:04:01.938 "iscsi_set_discovery_auth", 00:04:01.938 "iscsi_get_options", 00:04:01.938 "iscsi_target_node_request_logout", 00:04:01.938 "iscsi_target_node_set_redirect", 00:04:01.938 "iscsi_target_node_set_auth", 00:04:01.938 "iscsi_target_node_add_lun", 00:04:01.938 "iscsi_get_stats", 00:04:01.938 "iscsi_get_connections", 00:04:01.938 "iscsi_portal_group_set_auth", 00:04:01.938 "iscsi_start_portal_group", 00:04:01.938 "iscsi_delete_portal_group", 00:04:01.938 "iscsi_create_portal_group", 00:04:01.938 "iscsi_get_portal_groups", 00:04:01.938 "iscsi_delete_target_node", 00:04:01.938 "iscsi_target_node_remove_pg_ig_maps", 00:04:01.938 "iscsi_target_node_add_pg_ig_maps", 00:04:01.938 "iscsi_create_target_node", 00:04:01.938 "iscsi_get_target_nodes", 00:04:01.938 "iscsi_delete_initiator_group", 00:04:01.938 "iscsi_initiator_group_remove_initiators", 00:04:01.938 "iscsi_initiator_group_add_initiators", 00:04:01.938 "iscsi_create_initiator_group", 00:04:01.938 "iscsi_get_initiator_groups", 00:04:01.938 "nvmf_set_crdt", 00:04:01.938 "nvmf_set_config", 00:04:01.938 "nvmf_set_max_subsystems", 00:04:01.938 "nvmf_stop_mdns_prr", 00:04:01.938 "nvmf_publish_mdns_prr", 00:04:01.938 
"nvmf_subsystem_get_listeners", 00:04:01.938 "nvmf_subsystem_get_qpairs", 00:04:01.938 "nvmf_subsystem_get_controllers", 00:04:01.938 "nvmf_get_stats", 00:04:01.938 "nvmf_get_transports", 00:04:01.938 "nvmf_create_transport", 00:04:01.938 "nvmf_get_targets", 00:04:01.938 "nvmf_delete_target", 00:04:01.938 "nvmf_create_target", 00:04:01.938 "nvmf_subsystem_allow_any_host", 00:04:01.938 "nvmf_subsystem_set_keys", 00:04:01.938 "nvmf_subsystem_remove_host", 00:04:01.938 "nvmf_subsystem_add_host", 00:04:01.938 "nvmf_ns_remove_host", 00:04:01.938 "nvmf_ns_add_host", 00:04:01.938 "nvmf_subsystem_remove_ns", 00:04:01.938 "nvmf_subsystem_set_ns_ana_group", 00:04:01.938 "nvmf_subsystem_add_ns", 00:04:01.938 "nvmf_subsystem_listener_set_ana_state", 00:04:01.938 "nvmf_discovery_get_referrals", 00:04:01.938 "nvmf_discovery_remove_referral", 00:04:01.938 "nvmf_discovery_add_referral", 00:04:01.938 "nvmf_subsystem_remove_listener", 00:04:01.938 "nvmf_subsystem_add_listener", 00:04:01.938 "nvmf_delete_subsystem", 00:04:01.938 "nvmf_create_subsystem", 00:04:01.938 "nvmf_get_subsystems", 00:04:01.938 "env_dpdk_get_mem_stats", 00:04:01.938 "nbd_get_disks", 00:04:01.938 "nbd_stop_disk", 00:04:01.938 "nbd_start_disk", 00:04:01.938 "ublk_recover_disk", 00:04:01.938 "ublk_get_disks", 00:04:01.938 "ublk_stop_disk", 00:04:01.938 "ublk_start_disk", 00:04:01.938 "ublk_destroy_target", 00:04:01.938 "ublk_create_target", 00:04:01.938 "virtio_blk_create_transport", 00:04:01.938 "virtio_blk_get_transports", 00:04:01.938 "vhost_controller_set_coalescing", 00:04:01.938 "vhost_get_controllers", 00:04:01.938 "vhost_delete_controller", 00:04:01.938 "vhost_create_blk_controller", 00:04:01.938 "vhost_scsi_controller_remove_target", 00:04:01.938 "vhost_scsi_controller_add_target", 00:04:01.938 "vhost_start_scsi_controller", 00:04:01.938 "vhost_create_scsi_controller", 00:04:01.938 "thread_set_cpumask", 00:04:01.938 "scheduler_set_options", 00:04:01.938 "framework_get_governor", 00:04:01.938 "framework_get_scheduler", 00:04:01.938 "framework_set_scheduler", 00:04:01.938 "framework_get_reactors", 00:04:01.938 "thread_get_io_channels", 00:04:01.938 "thread_get_pollers", 00:04:01.938 "thread_get_stats", 00:04:01.938 "framework_monitor_context_switch", 00:04:01.938 "spdk_kill_instance", 00:04:01.938 "log_enable_timestamps", 00:04:01.938 "log_get_flags", 00:04:01.938 "log_clear_flag", 00:04:01.938 "log_set_flag", 00:04:01.938 "log_get_level", 00:04:01.938 "log_set_level", 00:04:01.938 "log_get_print_level", 00:04:01.938 "log_set_print_level", 00:04:01.938 "framework_enable_cpumask_locks", 00:04:01.938 "framework_disable_cpumask_locks", 00:04:01.938 "framework_wait_init", 00:04:01.938 "framework_start_init", 00:04:01.938 "scsi_get_devices", 00:04:01.938 "bdev_get_histogram", 00:04:01.938 "bdev_enable_histogram", 00:04:01.938 "bdev_set_qos_limit", 00:04:01.938 "bdev_set_qd_sampling_period", 00:04:01.938 "bdev_get_bdevs", 00:04:01.938 "bdev_reset_iostat", 00:04:01.938 "bdev_get_iostat", 00:04:01.938 "bdev_examine", 00:04:01.938 "bdev_wait_for_examine", 00:04:01.938 "bdev_set_options", 00:04:01.938 "accel_get_stats", 00:04:01.938 "accel_set_options", 00:04:01.938 "accel_set_driver", 00:04:01.938 "accel_crypto_key_destroy", 00:04:01.938 "accel_crypto_keys_get", 00:04:01.938 "accel_crypto_key_create", 00:04:01.938 "accel_assign_opc", 00:04:01.938 "accel_get_module_info", 00:04:01.938 "accel_get_opc_assignments", 00:04:01.938 "vmd_rescan", 00:04:01.938 "vmd_remove_device", 00:04:01.938 "vmd_enable", 00:04:01.938 "sock_get_default_impl", 
00:04:01.938 "sock_set_default_impl", 00:04:01.938 "sock_impl_set_options", 00:04:01.938 "sock_impl_get_options", 00:04:01.938 "iobuf_get_stats", 00:04:01.938 "iobuf_set_options", 00:04:01.938 "keyring_get_keys", 00:04:01.938 "vfu_tgt_set_base_path", 00:04:01.938 "framework_get_pci_devices", 00:04:01.938 "framework_get_config", 00:04:01.938 "framework_get_subsystems", 00:04:01.938 "fsdev_set_opts", 00:04:01.938 "fsdev_get_opts", 00:04:01.938 "trace_get_info", 00:04:01.938 "trace_get_tpoint_group_mask", 00:04:01.938 "trace_disable_tpoint_group", 00:04:01.939 "trace_enable_tpoint_group", 00:04:01.939 "trace_clear_tpoint_mask", 00:04:01.939 "trace_set_tpoint_mask", 00:04:01.939 "notify_get_notifications", 00:04:01.939 "notify_get_types", 00:04:01.939 "spdk_get_version", 00:04:01.939 "rpc_get_methods" 00:04:01.939 ] 00:04:01.939 13:51:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.939 13:51:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:01.939 13:51:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 789071 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 789071 ']' 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 789071 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 789071 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 789071' 00:04:01.939 killing process with pid 789071 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 789071 00:04:01.939 13:51:00 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 789071 00:04:02.199 00:04:02.199 real 0m1.533s 00:04:02.199 user 0m2.790s 00:04:02.199 sys 0m0.458s 00:04:02.199 13:51:00 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.199 13:51:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.199 ************************************ 00:04:02.199 END TEST spdkcli_tcp 00:04:02.199 ************************************ 00:04:02.199 13:51:00 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.199 13:51:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.199 13:51:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.199 13:51:00 -- common/autotest_common.sh@10 -- # set +x 00:04:02.199 ************************************ 00:04:02.200 START TEST dpdk_mem_utility 00:04:02.200 ************************************ 00:04:02.200 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.461 * Looking for test storage... 
00:04:02.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:02.461 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.461 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.461 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.461 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.461 13:51:00 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:02.461 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.461 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.461 --rc genhtml_branch_coverage=1 00:04:02.461 --rc genhtml_function_coverage=1 00:04:02.462 --rc genhtml_legend=1 00:04:02.462 --rc geninfo_all_blocks=1 00:04:02.462 --rc geninfo_unexecuted_blocks=1 00:04:02.462 00:04:02.462 ' 00:04:02.462 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.462 --rc 
genhtml_branch_coverage=1 00:04:02.462 --rc genhtml_function_coverage=1 00:04:02.462 --rc genhtml_legend=1 00:04:02.462 --rc geninfo_all_blocks=1 00:04:02.462 --rc geninfo_unexecuted_blocks=1 00:04:02.462 00:04:02.462 ' 00:04:02.462 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.462 --rc genhtml_branch_coverage=1 00:04:02.462 --rc genhtml_function_coverage=1 00:04:02.462 --rc genhtml_legend=1 00:04:02.462 --rc geninfo_all_blocks=1 00:04:02.462 --rc geninfo_unexecuted_blocks=1 00:04:02.462 00:04:02.462 ' 00:04:02.462 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.462 --rc genhtml_branch_coverage=1 00:04:02.462 --rc genhtml_function_coverage=1 00:04:02.462 --rc genhtml_legend=1 00:04:02.462 --rc geninfo_all_blocks=1 00:04:02.462 --rc geninfo_unexecuted_blocks=1 00:04:02.462 00:04:02.462 ' 00:04:02.462 13:51:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:02.462 13:51:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=789481 00:04:02.462 13:51:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 789481 00:04:02.462 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 789481 ']' 00:04:02.462 13:51:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.462 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.462 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.462 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.462 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.462 13:51:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.462 [2024-10-30 13:51:00.748889] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
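The dpdk_mem_utility test starting here drives two things against the freshly launched spdk_tgt: the env_dpdk_get_mem_stats RPC, which makes the target write a memory dump file, and scripts/dpdk_mem_info.py, which condenses that dump into the heap/mempool/memzone report that follows below. A rough equivalent outside the harness, assuming a target listening on the default /var/tmp/spdk.sock (the dump path is the one the RPC reports back):

  # ask the target to dump its DPDK memory statistics; the reply names the dump file
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # {"filename": "/tmp/spdk_mem_dump.txt"}

  # summarize heaps, mempools and memzones from the dump
  ./scripts/dpdk_mem_info.py

  # the -m 0 form used by the test prints the detailed element lists for heap 0
  ./scripts/dpdk_mem_info.py -m 0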
00:04:02.462 [2024-10-30 13:51:00.748972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789481 ] 00:04:02.722 [2024-10-30 13:51:00.837663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.722 [2024-10-30 13:51:00.873342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.292 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.292 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:03.292 13:51:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.292 13:51:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.292 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.292 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.292 { 00:04:03.292 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.292 } 00:04:03.292 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.292 13:51:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:03.292 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:03.292 1 heaps totaling size 810.000000 MiB 00:04:03.292 size: 810.000000 MiB heap id: 0 00:04:03.292 end heaps---------- 00:04:03.292 9 mempools totaling size 595.772034 MiB 00:04:03.292 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:03.292 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:03.292 size: 92.545471 MiB name: bdev_io_789481 00:04:03.292 size: 50.003479 MiB name: msgpool_789481 00:04:03.292 size: 36.509338 MiB name: fsdev_io_789481 00:04:03.292 size: 21.763794 MiB name: PDU_Pool 00:04:03.292 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:03.292 size: 4.133484 MiB name: evtpool_789481 00:04:03.292 size: 0.026123 MiB name: Session_Pool 00:04:03.292 end mempools------- 00:04:03.292 6 memzones totaling size 4.142822 MiB 00:04:03.292 size: 1.000366 MiB name: RG_ring_0_789481 00:04:03.292 size: 1.000366 MiB name: RG_ring_1_789481 00:04:03.292 size: 1.000366 MiB name: RG_ring_4_789481 00:04:03.292 size: 1.000366 MiB name: RG_ring_5_789481 00:04:03.292 size: 0.125366 MiB name: RG_ring_2_789481 00:04:03.292 size: 0.015991 MiB name: RG_ring_3_789481 00:04:03.292 end memzones------- 00:04:03.292 13:51:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.555 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:03.555 list of free elements. 
size: 10.862488 MiB 00:04:03.555 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:03.555 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:03.555 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:03.555 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:03.555 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:03.555 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:03.555 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:03.555 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:03.555 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:03.555 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:03.555 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:03.555 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:03.556 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:03.556 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:03.556 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:03.556 list of standard malloc elements. size: 199.218628 MiB 00:04:03.556 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:03.556 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:03.556 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:03.556 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:03.556 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:03.556 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:03.556 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:03.556 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:03.556 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:03.556 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:03.556 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:03.556 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:03.556 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:03.556 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:03.556 list of memzone associated elements. size: 599.918884 MiB 00:04:03.556 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:03.556 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:03.556 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:03.556 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:03.556 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:03.556 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_789481_0 00:04:03.556 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:03.556 associated memzone info: size: 48.002930 MiB name: MP_msgpool_789481_0 00:04:03.556 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:03.556 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_789481_0 00:04:03.556 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:03.556 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:03.556 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:03.556 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:03.556 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:03.556 associated memzone info: size: 3.000122 MiB name: MP_evtpool_789481_0 00:04:03.556 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:03.556 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_789481 00:04:03.556 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:03.556 associated memzone info: size: 1.007996 MiB name: MP_evtpool_789481 00:04:03.556 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:03.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:03.556 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:03.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:03.556 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:03.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:03.556 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:03.556 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:03.556 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:03.556 associated memzone info: size: 1.000366 MiB name: RG_ring_0_789481 00:04:03.556 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:03.556 associated memzone info: size: 1.000366 MiB name: RG_ring_1_789481 00:04:03.556 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:03.556 associated memzone info: size: 1.000366 MiB name: RG_ring_4_789481 00:04:03.556 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:03.556 associated memzone info: size: 1.000366 MiB name: RG_ring_5_789481 00:04:03.556 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:03.556 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_789481 00:04:03.556 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:03.556 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_789481 00:04:03.556 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:03.556 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:03.556 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:03.556 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:03.556 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:03.556 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:03.556 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:03.556 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_789481 00:04:03.556 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:03.556 associated memzone info: size: 0.125366 MiB name: RG_ring_2_789481 00:04:03.556 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:03.556 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:03.556 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:03.556 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:03.556 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:03.556 associated memzone info: size: 0.015991 MiB name: RG_ring_3_789481 00:04:03.556 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:03.556 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:03.556 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:03.556 associated memzone info: size: 0.000183 MiB name: MP_msgpool_789481 00:04:03.556 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:03.556 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_789481 00:04:03.556 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:03.556 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_789481 00:04:03.556 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:03.556 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:03.556 13:51:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:03.556 13:51:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 789481 00:04:03.556 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 789481 ']' 00:04:03.556 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 789481 00:04:03.556 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:03.556 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.556 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 789481 00:04:03.556 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.556 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.556 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 789481' 00:04:03.556 killing process with pid 789481 00:04:03.556 13:51:01 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 789481 00:04:03.556 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 789481 00:04:03.819 00:04:03.819 real 0m1.392s 00:04:03.819 user 0m1.455s 00:04:03.819 sys 0m0.417s 00:04:03.819 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.819 13:51:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.819 ************************************ 00:04:03.819 END TEST dpdk_mem_utility 00:04:03.819 ************************************ 00:04:03.819 13:51:01 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:03.819 13:51:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.819 13:51:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.819 13:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:03.819 ************************************ 00:04:03.819 START TEST event 00:04:03.819 ************************************ 00:04:03.819 13:51:01 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:03.819 * Looking for test storage... 00:04:03.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:03.819 13:51:02 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.819 13:51:02 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.819 13:51:02 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.087 13:51:02 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.087 13:51:02 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.087 13:51:02 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.087 13:51:02 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.087 13:51:02 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.087 13:51:02 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.087 13:51:02 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.087 13:51:02 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.087 13:51:02 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.087 13:51:02 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.087 13:51:02 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.087 13:51:02 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.087 13:51:02 event -- scripts/common.sh@344 -- # case "$op" in 00:04:04.087 13:51:02 event -- scripts/common.sh@345 -- # : 1 00:04:04.087 13:51:02 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.087 13:51:02 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.087 13:51:02 event -- scripts/common.sh@365 -- # decimal 1 00:04:04.087 13:51:02 event -- scripts/common.sh@353 -- # local d=1 00:04:04.087 13:51:02 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.087 13:51:02 event -- scripts/common.sh@355 -- # echo 1 00:04:04.087 13:51:02 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.087 13:51:02 event -- scripts/common.sh@366 -- # decimal 2 00:04:04.087 13:51:02 event -- scripts/common.sh@353 -- # local d=2 00:04:04.087 13:51:02 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.087 13:51:02 event -- scripts/common.sh@355 -- # echo 2 00:04:04.087 13:51:02 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.087 13:51:02 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.087 13:51:02 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.087 13:51:02 event -- scripts/common.sh@368 -- # return 0 00:04:04.087 13:51:02 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.087 13:51:02 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.087 --rc genhtml_branch_coverage=1 00:04:04.087 --rc genhtml_function_coverage=1 00:04:04.087 --rc genhtml_legend=1 00:04:04.087 --rc geninfo_all_blocks=1 00:04:04.087 --rc geninfo_unexecuted_blocks=1 00:04:04.087 00:04:04.087 ' 00:04:04.087 13:51:02 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.087 --rc genhtml_branch_coverage=1 00:04:04.087 --rc genhtml_function_coverage=1 00:04:04.087 --rc genhtml_legend=1 00:04:04.087 --rc geninfo_all_blocks=1 00:04:04.087 --rc geninfo_unexecuted_blocks=1 00:04:04.087 00:04:04.087 ' 00:04:04.087 13:51:02 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.087 --rc genhtml_branch_coverage=1 00:04:04.087 --rc genhtml_function_coverage=1 00:04:04.087 --rc genhtml_legend=1 00:04:04.087 --rc geninfo_all_blocks=1 00:04:04.087 --rc geninfo_unexecuted_blocks=1 00:04:04.087 00:04:04.087 ' 00:04:04.087 13:51:02 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.087 --rc genhtml_branch_coverage=1 00:04:04.087 --rc genhtml_function_coverage=1 00:04:04.087 --rc genhtml_legend=1 00:04:04.087 --rc geninfo_all_blocks=1 00:04:04.087 --rc geninfo_unexecuted_blocks=1 00:04:04.087 00:04:04.087 ' 00:04:04.087 13:51:02 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:04.087 13:51:02 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:04.087 13:51:02 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:04.087 13:51:02 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:04.087 13:51:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.087 13:51:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:04.087 ************************************ 00:04:04.087 START TEST event_perf 00:04:04.087 ************************************ 00:04:04.087 13:51:02 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:04.087 Running I/O for 1 seconds...[2024-10-30 13:51:02.222165] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:04.087 [2024-10-30 13:51:02.222265] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789850 ] 00:04:04.087 [2024-10-30 13:51:02.312354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:04.087 [2024-10-30 13:51:02.355801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.087 [2024-10-30 13:51:02.355874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:04.087 [2024-10-30 13:51:02.356026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.087 [2024-10-30 13:51:02.356027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:05.478 Running I/O for 1 seconds... 00:04:05.478 lcore 0: 177209 00:04:05.478 lcore 1: 177212 00:04:05.478 lcore 2: 177210 00:04:05.478 lcore 3: 177212 00:04:05.478 done. 00:04:05.478 00:04:05.478 real 0m1.183s 00:04:05.478 user 0m4.098s 00:04:05.478 sys 0m0.082s 00:04:05.478 13:51:03 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.478 13:51:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:05.478 ************************************ 00:04:05.478 END TEST event_perf 00:04:05.478 ************************************ 00:04:05.478 13:51:03 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.478 13:51:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:05.478 13:51:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.479 13:51:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.479 ************************************ 00:04:05.479 START TEST event_reactor 00:04:05.479 ************************************ 00:04:05.479 13:51:03 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.479 [2024-10-30 13:51:03.480724] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:04:05.479 [2024-10-30 13:51:03.480828] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790171 ] 00:04:05.479 [2024-10-30 13:51:03.570880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.479 [2024-10-30 13:51:03.609289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.423 test_start 00:04:06.423 oneshot 00:04:06.423 tick 100 00:04:06.423 tick 100 00:04:06.423 tick 250 00:04:06.423 tick 100 00:04:06.423 tick 100 00:04:06.423 tick 100 00:04:06.423 tick 250 00:04:06.423 tick 500 00:04:06.423 tick 100 00:04:06.423 tick 100 00:04:06.423 tick 250 00:04:06.423 tick 100 00:04:06.423 tick 100 00:04:06.423 test_end 00:04:06.423 00:04:06.423 real 0m1.175s 00:04:06.423 user 0m1.093s 00:04:06.423 sys 0m0.078s 00:04:06.423 13:51:04 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.423 13:51:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:06.423 ************************************ 00:04:06.423 END TEST event_reactor 00:04:06.423 ************************************ 00:04:06.423 13:51:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:06.423 13:51:04 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:06.423 13:51:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.423 13:51:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.423 ************************************ 00:04:06.423 START TEST event_reactor_perf 00:04:06.423 ************************************ 00:04:06.423 13:51:04 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:06.682 [2024-10-30 13:51:04.732658] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:04:06.682 [2024-10-30 13:51:04.732773] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790521 ] 00:04:06.682 [2024-10-30 13:51:04.822657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.682 [2024-10-30 13:51:04.861438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.623 test_start 00:04:07.623 test_end 00:04:07.623 Performance: 539519 events per second 00:04:07.623 00:04:07.623 real 0m1.175s 00:04:07.623 user 0m1.086s 00:04:07.623 sys 0m0.085s 00:04:07.623 13:51:05 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.623 13:51:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.623 ************************************ 00:04:07.623 END TEST event_reactor_perf 00:04:07.623 ************************************ 00:04:07.884 13:51:05 event -- event/event.sh@49 -- # uname -s 00:04:07.884 13:51:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:07.885 13:51:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.885 13:51:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.885 13:51:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.885 13:51:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.885 ************************************ 00:04:07.885 START TEST event_scheduler 00:04:07.885 ************************************ 00:04:07.885 13:51:05 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.885 * Looking for test storage... 
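The three event-framework benchmarks that just finished (event_perf, reactor, reactor_perf) are standalone binaries built under test/event in the SPDK tree; the harness mainly adds timing and log capture around them. They can be rerun by hand with the same flags seen in the trace, where -m appears to be the reactor core mask and -t the run time in seconds (paths assume the SPDK source tree as the working directory):

  ./test/event/event_perf/event_perf -m 0xF -t 1
  ./test/event/reactor/reactor -t 1
  ./test/event/reactor_perf/reactor_perf -t 1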
00:04:07.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.885 13:51:06 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.885 --rc genhtml_branch_coverage=1 00:04:07.885 --rc genhtml_function_coverage=1 00:04:07.885 --rc genhtml_legend=1 00:04:07.885 --rc geninfo_all_blocks=1 00:04:07.885 --rc geninfo_unexecuted_blocks=1 00:04:07.885 00:04:07.885 ' 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.885 --rc genhtml_branch_coverage=1 00:04:07.885 --rc genhtml_function_coverage=1 00:04:07.885 --rc genhtml_legend=1 00:04:07.885 --rc geninfo_all_blocks=1 00:04:07.885 --rc geninfo_unexecuted_blocks=1 00:04:07.885 00:04:07.885 ' 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:07.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.885 --rc genhtml_branch_coverage=1 00:04:07.885 --rc genhtml_function_coverage=1 00:04:07.885 --rc genhtml_legend=1 00:04:07.885 --rc geninfo_all_blocks=1 00:04:07.885 --rc geninfo_unexecuted_blocks=1 00:04:07.885 00:04:07.885 ' 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.885 --rc genhtml_branch_coverage=1 00:04:07.885 --rc genhtml_function_coverage=1 00:04:07.885 --rc genhtml_legend=1 00:04:07.885 --rc geninfo_all_blocks=1 00:04:07.885 --rc geninfo_unexecuted_blocks=1 00:04:07.885 00:04:07.885 ' 00:04:07.885 13:51:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:07.885 13:51:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=790907 00:04:07.885 13:51:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.885 13:51:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 790907 00:04:07.885 13:51:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 790907 ']' 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.885 13:51:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:08.146 [2024-10-30 13:51:06.236054] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:08.146 [2024-10-30 13:51:06.236132] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790907 ] 00:04:08.146 [2024-10-30 13:51:06.327738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:08.146 [2024-10-30 13:51:06.383968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.146 [2024-10-30 13:51:06.384127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.146 [2024-10-30 13:51:06.384293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:08.146 [2024-10-30 13:51:06.384292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:09.087 13:51:07 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.087 13:51:07 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:09.087 13:51:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:09.087 13:51:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.087 13:51:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:09.087 [2024-10-30 13:51:07.050536] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:09.087 [2024-10-30 13:51:07.050554] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:09.087 [2024-10-30 13:51:07.050563] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:09.088 [2024-10-30 13:51:07.050569] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:09.088 [2024-10-30 13:51:07.050575] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:09.088 13:51:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:09.088 13:51:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 [2024-10-30 13:51:07.112966] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
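Before that notice, the scheduler test switched the still-initializing app over to the dynamic scheduler and then let initialization proceed, all through ordinary JSON-RPC calls. A minimal sketch of the same sequence against an SPDK app started with --wait-for-rpc (default RPC socket assumed):

  # select the dynamic scheduler while the app is still waiting for RPCs
  ./scripts/rpc.py framework_set_scheduler dynamic

  # finish subsystem initialization
  ./scripts/rpc.py framework_start_init

  # optionally confirm which scheduler ended up active
  ./scripts/rpc.py framework_get_scheduler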
00:04:09.088 13:51:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:09.088 13:51:07 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.088 13:51:07 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 ************************************ 00:04:09.088 START TEST scheduler_create_thread 00:04:09.088 ************************************ 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 2 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 3 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 4 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 5 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 6 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 7 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 8 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.088 9 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.088 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.660 10 00:04:09.660 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.660 13:51:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:09.660 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.660 13:51:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.041 13:51:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.041 13:51:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:11.041 13:51:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:11.041 13:51:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.041 13:51:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.610 13:51:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.610 13:51:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:11.610 13:51:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.610 13:51:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.551 13:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.551 13:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:12.551 13:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:12.551 13:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.551 13:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.122 13:51:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.122 00:04:13.122 real 0m4.224s 00:04:13.122 user 0m0.028s 00:04:13.122 sys 0m0.004s 00:04:13.122 13:51:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.122 13:51:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.122 ************************************ 00:04:13.122 END TEST scheduler_create_thread 00:04:13.122 ************************************ 00:04:13.122 13:51:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:13.122 13:51:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 790907 00:04:13.122 13:51:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 790907 ']' 00:04:13.122 13:51:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 790907 00:04:13.122 13:51:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:13.122 13:51:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.122 13:51:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 790907 00:04:13.383 13:51:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:13.383 13:51:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:13.383 13:51:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 790907' 00:04:13.383 killing process with pid 790907 00:04:13.383 13:51:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 790907 00:04:13.383 13:51:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 790907 00:04:13.645 [2024-10-30 13:51:11.754889] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
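The scheduler_create_thread subtest traced above does its work through a test-only RPC plugin: it creates pinned active and idle threads with given busy percentages, bumps one thread to 50% activity, and deletes another before the app shuts down. A condensed sketch of those calls, assuming the scheduler_plugin module shipped with the test under test/event/scheduler is importable by rpc.py (the script arranges this through its environment):

  # create an always-busy thread pinned to core 0 and an idle one pinned to core 1
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0

  # thread ids come back from the create calls (11 and 12 in the run above)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12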
00:04:13.645 00:04:13.645 real 0m5.944s 00:04:13.645 user 0m13.837s 00:04:13.645 sys 0m0.423s 00:04:13.645 13:51:11 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.645 13:51:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.645 ************************************ 00:04:13.645 END TEST event_scheduler 00:04:13.645 ************************************ 00:04:13.907 13:51:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:13.907 13:51:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:13.907 13:51:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.907 13:51:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.907 13:51:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.907 ************************************ 00:04:13.907 START TEST app_repeat 00:04:13.907 ************************************ 00:04:13.907 13:51:11 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:13.907 13:51:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.907 13:51:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.907 13:51:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:13.907 13:51:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.907 13:51:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:13.907 13:51:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:13.907 13:51:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:13.907 13:51:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=792439 00:04:13.907 13:51:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.907 13:51:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:13.907 13:51:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 792439' 00:04:13.907 Process app_repeat pid: 792439 00:04:13.907 13:51:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:13.907 13:51:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:13.907 spdk_app_start Round 0 00:04:13.907 13:51:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 792439 /var/tmp/spdk-nbd.sock 00:04:13.907 13:51:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 792439 ']' 00:04:13.907 13:51:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:13.907 13:51:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.907 13:51:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:13.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:13.907 13:51:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.907 13:51:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:13.907 [2024-10-30 13:51:12.035876] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:04:13.907 [2024-10-30 13:51:12.035936] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792439 ] 00:04:13.907 [2024-10-30 13:51:12.119300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.907 [2024-10-30 13:51:12.151524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.907 [2024-10-30 13:51:12.151525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.169 13:51:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.169 13:51:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:14.169 13:51:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.169 Malloc0 00:04:14.169 13:51:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.430 Malloc1 00:04:14.430 13:51:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.430 13:51:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:14.692 /dev/nbd0 00:04:14.692 13:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:14.692 13:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:14.692 1+0 records in 00:04:14.692 1+0 records out 00:04:14.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291876 s, 14.0 MB/s 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:14.692 13:51:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:14.692 13:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:14.692 13:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.692 13:51:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:14.953 /dev/nbd1 00:04:14.954 13:51:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:14.954 13:51:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:14.954 1+0 records in 00:04:14.954 1+0 records out 00:04:14.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210867 s, 19.4 MB/s 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:14.954 13:51:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:14.954 13:51:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:14.954 13:51:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.954 
13:51:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.954 13:51:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.954 13:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:15.215 { 00:04:15.215 "nbd_device": "/dev/nbd0", 00:04:15.215 "bdev_name": "Malloc0" 00:04:15.215 }, 00:04:15.215 { 00:04:15.215 "nbd_device": "/dev/nbd1", 00:04:15.215 "bdev_name": "Malloc1" 00:04:15.215 } 00:04:15.215 ]' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:15.215 { 00:04:15.215 "nbd_device": "/dev/nbd0", 00:04:15.215 "bdev_name": "Malloc0" 00:04:15.215 }, 00:04:15.215 { 00:04:15.215 "nbd_device": "/dev/nbd1", 00:04:15.215 "bdev_name": "Malloc1" 00:04:15.215 } 00:04:15.215 ]' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:15.215 /dev/nbd1' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:15.215 /dev/nbd1' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:15.215 256+0 records in 00:04:15.215 256+0 records out 00:04:15.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123973 s, 84.6 MB/s 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:15.215 256+0 records in 00:04:15.215 256+0 records out 00:04:15.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123092 s, 85.2 MB/s 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:15.215 256+0 records in 00:04:15.215 256+0 records out 00:04:15.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135872 s, 77.2 MB/s 00:04:15.215 13:51:13 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.215 13:51:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.476 13:51:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:15.737 13:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.737 13:51:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:15.737 13:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:15.737 13:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.737 13:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:15.737 13:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:15.737 13:51:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:15.737 13:51:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:15.737 13:51:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:15.737 13:51:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:15.737 13:51:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:15.998 13:51:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:15.998 [2024-10-30 13:51:14.294259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:16.259 [2024-10-30 13:51:14.324310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.259 [2024-10-30 13:51:14.324310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.259 [2024-10-30 13:51:14.353383] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:16.259 [2024-10-30 13:51:14.353414] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:19.563 13:51:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:19.563 13:51:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:19.563 spdk_app_start Round 1 00:04:19.563 13:51:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 792439 /var/tmp/spdk-nbd.sock 00:04:19.563 13:51:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 792439 ']' 00:04:19.563 13:51:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:19.563 13:51:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.563 13:51:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:19.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
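Round 0 above, and each round that follows, exercises the same RPC pattern against the app listening on /var/tmp/spdk-nbd.sock: create two 64 MB malloc bdevs, export them through the kernel nbd driver, verify the data path, then detach the devices and stop the app instance. A condensed sketch of one round using only the rpc.py calls that appear in the log (the data-verification step is shown in a separate sketch further below; assumes app_repeat is already running on the socket):

# Condensed sketch of one app_repeat round, built from the RPC calls in the log.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
modprobe nbd

# two 64 MB malloc bdevs with 4096-byte blocks -> named Malloc0 and Malloc1
rpc bdev_malloc_create 64 4096
rpc bdev_malloc_create 64 4096

# export them as kernel block devices
rpc nbd_start_disk Malloc0 /dev/nbd0
rpc nbd_start_disk Malloc1 /dev/nbd1
rpc nbd_get_disks                  # should report both nbd devices

# ... dd/cmp data verification (see the sketch further below) ...

# detach the nbd devices and ask the app to stop this iteration
rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1
rpc spdk_kill_instance SIGTERM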
00:04:19.563 13:51:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.563 13:51:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:19.563 13:51:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.563 13:51:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:19.563 13:51:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.563 Malloc0 00:04:19.563 13:51:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.563 Malloc1 00:04:19.563 13:51:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.563 13:51:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.824 /dev/nbd0 00:04:19.824 13:51:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.824 13:51:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.824 13:51:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:19.824 13:51:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:19.824 13:51:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:19.824 13:51:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:19.824 13:51:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:19.824 13:51:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:19.824 13:51:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:19.824 13:51:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:19.824 13:51:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:19.824 1+0 records in 00:04:19.824 1+0 records out 00:04:19.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274842 s, 14.9 MB/s 00:04:19.824 13:51:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.824 13:51:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:19.824 13:51:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.824 13:51:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:19.824 13:51:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:19.824 13:51:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.824 13:51:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.824 13:51:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:20.085 /dev/nbd1 00:04:20.085 13:51:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:20.085 13:51:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.085 1+0 records in 00:04:20.085 1+0 records out 00:04:20.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167456 s, 24.5 MB/s 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:20.085 13:51:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:20.085 13:51:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.085 13:51:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.085 13:51:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.085 13:51:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.085 13:51:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.345 13:51:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:20.345 { 00:04:20.345 "nbd_device": "/dev/nbd0", 00:04:20.345 "bdev_name": "Malloc0" 00:04:20.345 }, 00:04:20.345 { 00:04:20.345 "nbd_device": "/dev/nbd1", 00:04:20.345 "bdev_name": "Malloc1" 00:04:20.345 } 00:04:20.345 ]' 00:04:20.345 13:51:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:20.345 { 00:04:20.346 "nbd_device": "/dev/nbd0", 00:04:20.346 "bdev_name": "Malloc0" 00:04:20.346 }, 00:04:20.346 { 00:04:20.346 "nbd_device": "/dev/nbd1", 00:04:20.346 "bdev_name": "Malloc1" 00:04:20.346 } 00:04:20.346 ]' 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:20.346 /dev/nbd1' 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:20.346 /dev/nbd1' 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:20.346 256+0 records in 00:04:20.346 256+0 records out 00:04:20.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127157 s, 82.5 MB/s 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:20.346 256+0 records in 00:04:20.346 256+0 records out 00:04:20.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122012 s, 85.9 MB/s 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:20.346 256+0 records in 00:04:20.346 256+0 records out 00:04:20.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012854 s, 81.6 MB/s 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.346 13:51:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.606 13:51:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.867 13:51:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.867 13:51:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.867 13:51:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:21.128 13:51:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:21.128 [2024-10-30 13:51:19.421220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.436 [2024-10-30 13:51:19.451157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.436 [2024-10-30 13:51:19.451157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.436 [2024-10-30 13:51:19.480643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.436 [2024-10-30 13:51:19.480674] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.167 13:51:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.167 13:51:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:24.167 spdk_app_start Round 2 00:04:24.167 13:51:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 792439 /var/tmp/spdk-nbd.sock 00:04:24.167 13:51:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 792439 ']' 00:04:24.167 13:51:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.167 13:51:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.167 13:51:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
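The nbd_dd_data_verify step inside each round is ordinary dd plus cmp: fill a temp file with 1 MiB of random data, write it through each /dev/nbdX with O_DIRECT, and compare the device contents back against the file. A standalone sketch of that check (the temp-file path here is an assumption; the log writes it under spdk/test/event/nbdrandtest in the workspace):

# Sketch of the write/verify pattern from the log: 256 x 4 KiB of random data
# per device, compared back byte-for-byte with cmp.
tmp_file=/tmp/nbdrandtest            # assumed path; the log uses test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

for dev in "${nbd_list[@]}"; do
    # oflag=direct bypasses the page cache so the data reaches the bdev immediately
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

for dev in "${nbd_list[@]}"; do
    # cmp exits non-zero on the first differing byte, which fails the test
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm "$tmp_file"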
00:04:24.167 13:51:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.167 13:51:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.477 13:51:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.477 13:51:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:24.477 13:51:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.477 Malloc0 00:04:24.477 13:51:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.792 Malloc1 00:04:24.792 13:51:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.792 13:51:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.113 /dev/nbd0 00:04:25.113 13:51:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.113 13:51:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:25.113 1+0 records in 00:04:25.113 1+0 records out 00:04:25.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276502 s, 14.8 MB/s 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:25.113 13:51:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.113 13:51:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.113 13:51:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.113 /dev/nbd1 00:04:25.113 13:51:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.113 13:51:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.113 1+0 records in 00:04:25.113 1+0 records out 00:04:25.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291152 s, 14.1 MB/s 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:25.113 13:51:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.438 13:51:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:25.438 13:51:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:25.438 13:51:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.438 13:51:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.438 13:51:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.438 13:51:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.438 13:51:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.438 13:51:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:25.438 { 00:04:25.438 "nbd_device": "/dev/nbd0", 00:04:25.438 "bdev_name": "Malloc0" 00:04:25.438 }, 00:04:25.438 { 00:04:25.438 "nbd_device": "/dev/nbd1", 00:04:25.438 "bdev_name": "Malloc1" 00:04:25.438 } 00:04:25.438 ]' 00:04:25.438 13:51:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.438 { 00:04:25.438 "nbd_device": "/dev/nbd0", 00:04:25.438 "bdev_name": "Malloc0" 00:04:25.438 }, 00:04:25.438 { 00:04:25.438 "nbd_device": "/dev/nbd1", 00:04:25.439 "bdev_name": "Malloc1" 00:04:25.439 } 00:04:25.439 ]' 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.439 /dev/nbd1' 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.439 /dev/nbd1' 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.439 256+0 records in 00:04:25.439 256+0 records out 00:04:25.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121155 s, 86.5 MB/s 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.439 256+0 records in 00:04:25.439 256+0 records out 00:04:25.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119204 s, 88.0 MB/s 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.439 256+0 records in 00:04:25.439 256+0 records out 00:04:25.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129551 s, 80.9 MB/s 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.439 13:51:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.700 13:51:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.960 13:51:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.219 13:51:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.219 13:51:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.219 13:51:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.478 [2024-10-30 13:51:24.586912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.478 [2024-10-30 13:51:24.616947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.478 [2024-10-30 13:51:24.616950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.478 [2024-10-30 13:51:24.646010] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.478 [2024-10-30 13:51:24.646042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:29.774 13:51:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 792439 /var/tmp/spdk-nbd.sock 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 792439 ']' 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
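The final "Waiting for process..." above is the event.sh driver winding down: app_repeat was started once with -t 4, and for each of rounds 0..2 the loop waits for the RPC socket, runs the malloc/nbd verification sketched earlier, then sends spdk_kill_instance SIGTERM so the binary re-enters spdk_app_start for the next round (hence the "spdk_app_start is called in Round ..." summary just below). A rough reconstruction of that outer loop from the commands visible in the log (waitforlisten and killprocess are the autotest_common.sh helpers; the background launch and $! capture are inferred from the recorded repeat_pid):

# Rough reconstruction of the app_repeat driver loop seen in the log.
rpc_server=/var/tmp/spdk-nbd.sock
./test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"
    # ... malloc/nbd setup and dd/cmp verification (sketched above) ...
    ./scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
    sleep 3    # give the app time to tear down and restart spdk_app_start
done

waitforlisten "$repeat_pid" "$rpc_server"
killprocess "$repeat_pid"
trap - SIGINT SIGTERM EXIT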
00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:29.774 13:51:27 event.app_repeat -- event/event.sh@39 -- # killprocess 792439 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 792439 ']' 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 792439 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 792439 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 792439' 00:04:29.774 killing process with pid 792439 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@973 -- # kill 792439 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@978 -- # wait 792439 00:04:29.774 spdk_app_start is called in Round 0. 00:04:29.774 Shutdown signal received, stop current app iteration 00:04:29.774 Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 reinitialization... 00:04:29.774 spdk_app_start is called in Round 1. 00:04:29.774 Shutdown signal received, stop current app iteration 00:04:29.774 Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 reinitialization... 00:04:29.774 spdk_app_start is called in Round 2. 00:04:29.774 Shutdown signal received, stop current app iteration 00:04:29.774 Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 reinitialization... 00:04:29.774 spdk_app_start is called in Round 3. 
00:04:29.774 Shutdown signal received, stop current app iteration 00:04:29.774 13:51:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:29.774 13:51:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:29.774 00:04:29.774 real 0m15.846s 00:04:29.774 user 0m34.851s 00:04:29.774 sys 0m2.267s 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.774 13:51:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.774 ************************************ 00:04:29.774 END TEST app_repeat 00:04:29.774 ************************************ 00:04:29.774 13:51:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:29.774 13:51:27 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:29.774 13:51:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.774 13:51:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.774 13:51:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.774 ************************************ 00:04:29.774 START TEST cpu_locks 00:04:29.774 ************************************ 00:04:29.774 13:51:27 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:29.774 * Looking for test storage... 00:04:29.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:29.775 13:51:28 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.775 13:51:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.775 13:51:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.037 13:51:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.037 13:51:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.038 13:51:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:30.038 13:51:28 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.038 13:51:28 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.038 --rc genhtml_branch_coverage=1 00:04:30.038 --rc genhtml_function_coverage=1 00:04:30.038 --rc genhtml_legend=1 00:04:30.038 --rc geninfo_all_blocks=1 00:04:30.038 --rc geninfo_unexecuted_blocks=1 00:04:30.038 00:04:30.038 ' 00:04:30.038 13:51:28 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.038 --rc genhtml_branch_coverage=1 00:04:30.038 --rc genhtml_function_coverage=1 00:04:30.038 --rc genhtml_legend=1 00:04:30.038 --rc geninfo_all_blocks=1 00:04:30.038 --rc geninfo_unexecuted_blocks=1 00:04:30.038 00:04:30.038 ' 00:04:30.038 13:51:28 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.038 --rc genhtml_branch_coverage=1 00:04:30.038 --rc genhtml_function_coverage=1 00:04:30.038 --rc genhtml_legend=1 00:04:30.038 --rc geninfo_all_blocks=1 00:04:30.038 --rc geninfo_unexecuted_blocks=1 00:04:30.038 00:04:30.038 ' 00:04:30.038 13:51:28 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.038 --rc genhtml_branch_coverage=1 00:04:30.038 --rc genhtml_function_coverage=1 00:04:30.038 --rc genhtml_legend=1 00:04:30.038 --rc geninfo_all_blocks=1 00:04:30.038 --rc geninfo_unexecuted_blocks=1 00:04:30.038 00:04:30.038 ' 00:04:30.038 13:51:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:30.038 13:51:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:30.038 13:51:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:30.038 13:51:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:30.038 13:51:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.038 13:51:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.038 13:51:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.038 ************************************ 
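The scripts/common.sh trace above ('lt 1.15 2' via cmp_versions) is a component-wise compare of dotted version strings, used here to decide which lcov coverage options to export. A minimal re-implementation of the same idea, for readability only (ver_lt is an illustrative name; the real helpers are lt and cmp_versions in scripts/common.sh):

  ver_lt() {
      # split both versions on . - : and compare component by component, padding with 0
      local IFS=.-:
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1   # equal is not "less than"
  }

  ver_lt 1.15 2 && echo "lcov < 2: enable the branch/function coverage options"

With an lcov 1.x detected, the run exports the --rc lcov_branch_coverage / lcov_function_coverage flags seen in the trace.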
00:04:30.038 START TEST default_locks 00:04:30.038 ************************************ 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=796046 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 796046 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 796046 ']' 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.038 13:51:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.038 [2024-10-30 13:51:28.220917] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:30.038 [2024-10-30 13:51:28.220977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796046 ] 00:04:30.038 [2024-10-30 13:51:28.306362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.299 [2024-10-30 13:51:28.341682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.869 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.869 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:30.869 13:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 796046 00:04:30.869 13:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 796046 00:04:30.869 13:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:31.439 lslocks: write error 00:04:31.439 13:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 796046 00:04:31.439 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 796046 ']' 00:04:31.439 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 796046 00:04:31.439 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:31.440 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.440 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 796046 00:04:31.440 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.440 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.440 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 796046' 
00:04:31.440 killing process with pid 796046 00:04:31.440 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 796046 00:04:31.440 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 796046 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 796046 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 796046 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 796046 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 796046 ']' 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (796046) - No such process 00:04:31.699 ERROR: process (pid: 796046) is no longer running 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:31.699 00:04:31.699 real 0m1.687s 00:04:31.699 user 0m1.804s 00:04:31.699 sys 0m0.577s 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.699 13:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.699 ************************************ 00:04:31.699 END TEST default_locks 00:04:31.699 ************************************ 00:04:31.699 13:51:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:31.699 13:51:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.699 13:51:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.699 13:51:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.699 ************************************ 00:04:31.699 START TEST default_locks_via_rpc 00:04:31.699 ************************************ 00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=796409 00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 796409 00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 796409 ']' 00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
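The default_locks test that just finished exercises the most basic guarantee: a target started with -m 0x1 holds a file lock for core 0, visible with lslocks, and once the process is killed the pid checks used by killprocess and waitforlisten fail as expected (the 'kill: ... No such process' and 'lslocks: write error' lines are expected noise, the latter apparently from grep -q closing the pipe after the first match). A standalone sketch of the same sequence, using the spdk_tgt path from this run; the sleep stands in for the real waitforlisten helper:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &        # single reactor on core 0 claims /var/tmp/spdk_cpu_lock_000
  pid=$!
  sleep 1                     # the test proper waits for the RPC socket instead

  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"

  kill "$pid"; wait "$pid" 2>/dev/null
  kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone, as the test expects"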
00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.699 13:51:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.699 [2024-10-30 13:51:29.979550] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:31.699 [2024-10-30 13:51:29.979597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796409 ] 00:04:31.960 [2024-10-30 13:51:30.064995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.960 [2024-10-30 13:51:30.095765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 796409 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 796409 00:04:32.534 13:51:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 796409 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 796409 ']' 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 796409 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 796409 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.107 13:51:31 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 796409' 00:04:33.107 killing process with pid 796409 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 796409 00:04:33.107 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 796409 00:04:33.368 00:04:33.368 real 0m1.640s 00:04:33.368 user 0m1.779s 00:04:33.368 sys 0m0.553s 00:04:33.368 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.368 13:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.368 ************************************ 00:04:33.368 END TEST default_locks_via_rpc 00:04:33.368 ************************************ 00:04:33.368 13:51:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:33.368 13:51:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.368 13:51:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.368 13:51:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.368 ************************************ 00:04:33.368 START TEST non_locking_app_on_locked_coremask 00:04:33.368 ************************************ 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=796790 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 796790 /var/tmp/spdk.sock 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 796790 ']' 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.368 13:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.629 [2024-10-30 13:51:31.695943] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
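default_locks_via_rpc, which just completed, shows the same lock being dropped and re-taken at runtime: framework_disable_cpumask_locks removes the /var/tmp/spdk_cpu_lock_* files while the target keeps running, and framework_enable_cpumask_locks claims them again. A sketch of that RPC round trip, assuming a target already running on the default /var/tmp/spdk.sock with its pid in $pid and reusing the $rpc path from the earlier sketch:

  "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core lock while locks are disabled"

  "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"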
00:04:33.629 [2024-10-30 13:51:31.695991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796790 ] 00:04:33.629 [2024-10-30 13:51:31.779776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.629 [2024-10-30 13:51:31.809712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=796831 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 796831 /var/tmp/spdk2.sock 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 796831 ']' 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:34.200 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.201 13:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.461 [2024-10-30 13:51:32.512133] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:34.461 [2024-10-30 13:51:32.512185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796831 ] 00:04:34.461 [2024-10-30 13:51:32.597658] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:34.461 [2024-10-30 13:51:32.597682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.461 [2024-10-30 13:51:32.655876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.033 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.033 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:35.033 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 796790 00:04:35.033 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 796790 00:04:35.033 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.604 lslocks: write error 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 796790 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 796790 ']' 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 796790 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 796790 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 796790' 00:04:35.604 killing process with pid 796790 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 796790 00:04:35.604 13:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 796790 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 796831 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 796831 ']' 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 796831 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 796831 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 796831' 00:04:36.173 killing 
process with pid 796831 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 796831 00:04:36.173 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 796831 00:04:36.434 00:04:36.434 real 0m2.874s 00:04:36.434 user 0m3.195s 00:04:36.434 sys 0m0.884s 00:04:36.434 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.434 13:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.434 ************************************ 00:04:36.434 END TEST non_locking_app_on_locked_coremask 00:04:36.434 ************************************ 00:04:36.434 13:51:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:36.434 13:51:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.434 13:51:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.434 13:51:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.434 ************************************ 00:04:36.434 START TEST locking_app_on_unlocked_coremask 00:04:36.434 ************************************ 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=797412 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 797412 /var/tmp/spdk.sock 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 797412 ']' 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.434 13:51:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.434 [2024-10-30 13:51:34.650840] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:36.434 [2024-10-30 13:51:34.650897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797412 ] 00:04:36.694 [2024-10-30 13:51:34.738467] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
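non_locking_app_on_locked_coremask, which just finished, demonstrates the escape hatch: a second target may share core 0 with a locked target only because it is started with --disable-cpumask-locks and therefore never tries to claim the lock itself. Reduced to its two launches (reusing $spdk_tgt from the earlier sketch):

  "$spdk_tgt" -m 0x1 &                                                  # holds the core-0 lock
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # overlaps core 0, never locks

Only the first pid shows a spdk_cpu_lock entry in lslocks, which is what the locks_exist check above verified.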
00:04:36.694 [2024-10-30 13:51:34.738495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.694 [2024-10-30 13:51:34.772043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=797514 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 797514 /var/tmp/spdk2.sock 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 797514 ']' 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:37.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.264 13:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.264 [2024-10-30 13:51:35.485250] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:04:37.264 [2024-10-30 13:51:35.485302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797514 ] 00:04:37.524 [2024-10-30 13:51:35.569727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.524 [2024-10-30 13:51:35.627959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.095 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.095 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.095 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 797514 00:04:38.095 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 797514 00:04:38.095 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.667 lslocks: write error 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 797412 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 797412 ']' 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 797412 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 797412 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 797412' 00:04:38.667 killing process with pid 797412 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 797412 00:04:38.667 13:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 797412 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 797514 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 797514 ']' 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 797514 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 797514 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.928 13:51:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 797514' 00:04:38.928 killing process with pid 797514 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 797514 00:04:38.928 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 797514 00:04:39.189 00:04:39.189 real 0m2.795s 00:04:39.189 user 0m3.121s 00:04:39.189 sys 0m0.855s 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.189 ************************************ 00:04:39.189 END TEST locking_app_on_unlocked_coremask 00:04:39.189 ************************************ 00:04:39.189 13:51:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:39.189 13:51:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.189 13:51:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.189 13:51:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.189 ************************************ 00:04:39.189 START TEST locking_app_on_locked_coremask 00:04:39.189 ************************************ 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=797894 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 797894 /var/tmp/spdk.sock 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 797894 ']' 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.189 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.449 [2024-10-30 13:51:37.509165] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
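locking_app_on_unlocked_coremask is the mirror image: the first target opts out of locking, so the core-0 lock stays free and the second, normally locking target can claim it. Sketch (same $spdk_tgt as before):

  "$spdk_tgt" -m 0x1 --disable-cpumask-locks &        # leaves /var/tmp/spdk_cpu_lock_000 alone
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &         # claims the core-0 lock itself
  pid2=$!
  lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "lock held by the second instance"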
00:04:39.449 [2024-10-30 13:51:37.509208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797894 ] 00:04:39.449 [2024-10-30 13:51:37.560136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.449 [2024-10-30 13:51:37.589665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.709 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.709 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=798034 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 798034 /var/tmp/spdk2.sock 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 798034 /var/tmp/spdk2.sock 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 798034 /var/tmp/spdk2.sock 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 798034 ']' 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.710 13:51:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.710 [2024-10-30 13:51:37.832273] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:04:39.710 [2024-10-30 13:51:37.832323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798034 ] 00:04:39.710 [2024-10-30 13:51:37.921658] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 797894 has claimed it. 00:04:39.710 [2024-10-30 13:51:37.921697] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:40.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (798034) - No such process 00:04:40.282 ERROR: process (pid: 798034) is no longer running 00:04:40.283 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.283 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:40.283 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:40.283 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.283 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.283 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.283 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 797894 00:04:40.283 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 797894 00:04:40.283 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.855 lslocks: write error 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 797894 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 797894 ']' 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 797894 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 797894 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 797894' 00:04:40.856 killing process with pid 797894 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 797894 00:04:40.856 13:51:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 797894 00:04:40.856 00:04:40.856 real 0m1.668s 00:04:40.856 user 0m1.830s 00:04:40.856 sys 0m0.579s 00:04:40.856 13:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.856 
13:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.856 ************************************ 00:04:40.856 END TEST locking_app_on_locked_coremask 00:04:40.856 ************************************ 00:04:41.117 13:51:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:41.117 13:51:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.117 13:51:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.117 13:51:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.117 ************************************ 00:04:41.117 START TEST locking_overlapped_coremask 00:04:41.117 ************************************ 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=798275 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 798275 /var/tmp/spdk.sock 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 798275 ']' 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.117 13:51:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.117 [2024-10-30 13:51:39.253036] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
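locking_app_on_locked_coremask, which just ran, is the enforcement case: with locking left on for both targets, the second one refuses to start on an already-claimed core ('Cannot create lock on core 0 ... Unable to acquire lock on assigned core mask - exiting'), and the NOT waitforlisten wrapper above turns that expected failure into a pass. A foreground sketch of the same collision (same $spdk_tgt; the test itself backgrounds the second target and watches its pid instead of the exit status):

  "$spdk_tgt" -m 0x1 &                                   # claims core 0
  sleep 1
  if ! "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "second instance rejected, as the test expects"
  fi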
00:04:41.117 [2024-10-30 13:51:39.253088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798275 ] 00:04:41.117 [2024-10-30 13:51:39.339229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:41.117 [2024-10-30 13:51:39.374703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.117 [2024-10-30 13:51:39.374859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.117 [2024-10-30 13:51:39.374967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=798590 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 798590 /var/tmp/spdk2.sock 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 798590 /var/tmp/spdk2.sock 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 798590 /var/tmp/spdk2.sock 00:04:42.058 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 798590 ']' 00:04:42.059 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:42.059 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.059 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:42.059 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.059 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.059 [2024-10-30 13:51:40.113647] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:04:42.059 [2024-10-30 13:51:40.113700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798590 ] 00:04:42.059 [2024-10-30 13:51:40.226953] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 798275 has claimed it. 00:04:42.059 [2024-10-30 13:51:40.226997] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:42.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (798590) - No such process 00:04:42.631 ERROR: process (pid: 798590) is no longer running 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 798275 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 798275 ']' 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 798275 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 798275 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 798275' 00:04:42.631 killing process with pid 798275 00:04:42.631 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 798275 00:04:42.631 13:51:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 798275 00:04:42.893 00:04:42.893 real 0m1.785s 00:04:42.893 user 0m5.166s 00:04:42.893 sys 0m0.403s 00:04:42.893 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.893 13:51:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.893 ************************************ 00:04:42.893 END TEST locking_overlapped_coremask 00:04:42.893 ************************************ 00:04:42.893 13:51:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:42.894 13:51:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.894 13:51:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.894 13:51:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.894 ************************************ 00:04:42.894 START TEST locking_overlapped_coremask_via_rpc 00:04:42.894 ************************************ 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=798708 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 798708 /var/tmp/spdk.sock 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 798708 ']' 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.894 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.894 [2024-10-30 13:51:41.117969] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:42.894 [2024-10-30 13:51:41.118023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798708 ] 00:04:43.155 [2024-10-30 13:51:41.202704] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
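locking_overlapped_coremask extends the same check to multi-core masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two masks collide on core 2 and the second target aborts, leaving exactly the first target's three lock files behind (that is what check_remaining_locks compared against /var/tmp/spdk_cpu_lock_{000..002}). Sketch, reusing $spdk_tgt:

  "$spdk_tgt" -m 0x7 &                                   # locks cores 0, 1 and 2
  sleep 1
  "$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock || true     # fails: core 2 is already claimed
  ls /var/tmp/spdk_cpu_lock_*                            # expect _000 _001 _002 only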
00:04:43.155 [2024-10-30 13:51:41.202732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.155 [2024-10-30 13:51:41.239153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.155 [2024-10-30 13:51:41.239304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.155 [2024-10-30 13:51:41.239304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=798971 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 798971 /var/tmp/spdk2.sock 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 798971 ']' 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.727 13:51:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.727 [2024-10-30 13:51:41.973912] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:43.727 [2024-10-30 13:51:41.973967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798971 ] 00:04:43.988 [2024-10-30 13:51:42.085673] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.988 [2024-10-30 13:51:42.085701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.988 [2024-10-30 13:51:42.159251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.988 [2024-10-30 13:51:42.162870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.988 [2024-10-30 13:51:42.162871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.561 [2024-10-30 13:51:42.771832] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 798708 has claimed it. 
00:04:44.561 request: 00:04:44.561 { 00:04:44.561 "method": "framework_enable_cpumask_locks", 00:04:44.561 "req_id": 1 00:04:44.561 } 00:04:44.561 Got JSON-RPC error response 00:04:44.561 response: 00:04:44.561 { 00:04:44.561 "code": -32603, 00:04:44.561 "message": "Failed to claim CPU core: 2" 00:04:44.561 } 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 798708 /var/tmp/spdk.sock 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 798708 ']' 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.561 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.823 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.823 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:44.823 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 798971 /var/tmp/spdk2.sock 00:04:44.823 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 798971 ']' 00:04:44.823 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.823 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.823 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
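The -32603 response above is the expected outcome of this test: the first target (pid 798708, -m 0x7) claims cores 0-2 via framework_enable_cpumask_locks, so the second target (-m 0x1c, which overlaps on core 2) cannot claim its mask. A minimal way to retrace this by hand is sketched below; it assumes a local SPDK build tree with relative paths (the run above invokes the same binaries through absolute Jenkins workspace paths) and is an illustration of the overlap, not part of cpu_locks.sh.

# Start two targets with overlapping coremasks; both defer core locking at startup.
./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0,1,2
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2,3,4
# (wait for /var/tmp/spdk.sock and /var/tmp/spdk2.sock to appear, as waitforlisten does above)

# The first target claims its cores; one lock file per claimed core is created.
./scripts/rpc.py framework_enable_cpumask_locks
ls /var/tmp/spdk_cpu_lock_*        # expected: _000 _001 _002, as checked by check_remaining_locks

# The second target overlaps on core 2, so the same RPC should fail with
# -32603 "Failed to claim CPU core: 2", matching the JSON-RPC response above.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks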
00:04:44.823 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.823 13:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.085 13:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.085 13:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.085 13:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:45.085 13:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:45.086 13:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:45.086 13:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:45.086 00:04:45.086 real 0m2.089s 00:04:45.086 user 0m0.878s 00:04:45.086 sys 0m0.139s 00:04:45.086 13:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.086 13:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.086 ************************************ 00:04:45.086 END TEST locking_overlapped_coremask_via_rpc 00:04:45.086 ************************************ 00:04:45.086 13:51:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:45.086 13:51:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 798708 ]] 00:04:45.086 13:51:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 798708 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 798708 ']' 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 798708 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 798708 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 798708' 00:04:45.086 killing process with pid 798708 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 798708 00:04:45.086 13:51:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 798708 00:04:45.348 13:51:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 798971 ]] 00:04:45.348 13:51:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 798971 00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 798971 ']' 00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 798971 00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 798971 00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 798971' 00:04:45.348 killing process with pid 798971 00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 798971 00:04:45.348 13:51:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 798971 00:04:45.610 13:51:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.610 13:51:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:45.610 13:51:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 798708 ]] 00:04:45.610 13:51:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 798708 00:04:45.610 13:51:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 798708 ']' 00:04:45.610 13:51:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 798708 00:04:45.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (798708) - No such process 00:04:45.610 13:51:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 798708 is not found' 00:04:45.610 Process with pid 798708 is not found 00:04:45.610 13:51:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 798971 ]] 00:04:45.610 13:51:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 798971 00:04:45.610 13:51:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 798971 ']' 00:04:45.610 13:51:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 798971 00:04:45.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (798971) - No such process 00:04:45.610 13:51:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 798971 is not found' 00:04:45.610 Process with pid 798971 is not found 00:04:45.610 13:51:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.610 00:04:45.610 real 0m15.788s 00:04:45.610 user 0m27.787s 00:04:45.610 sys 0m4.936s 00:04:45.610 13:51:43 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.610 13:51:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.610 ************************************ 00:04:45.610 END TEST cpu_locks 00:04:45.610 ************************************ 00:04:45.610 00:04:45.610 real 0m41.795s 00:04:45.610 user 1m23.042s 00:04:45.610 sys 0m8.297s 00:04:45.610 13:51:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.610 13:51:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.610 ************************************ 00:04:45.610 END TEST event 00:04:45.610 ************************************ 00:04:45.610 13:51:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:45.610 13:51:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.610 13:51:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.610 13:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:45.610 ************************************ 00:04:45.610 START TEST thread 00:04:45.610 ************************************ 00:04:45.610 13:51:43 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:45.873 * Looking for test storage... 00:04:45.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:45.873 13:51:43 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.873 13:51:43 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.873 13:51:43 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.873 13:51:44 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.873 13:51:44 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.873 13:51:44 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.873 13:51:44 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.873 13:51:44 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.873 13:51:44 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.873 13:51:44 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.873 13:51:44 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.873 13:51:44 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.873 13:51:44 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.873 13:51:44 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.873 13:51:44 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.873 13:51:44 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:45.873 13:51:44 thread -- scripts/common.sh@345 -- # : 1 00:04:45.873 13:51:44 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.873 13:51:44 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.873 13:51:44 thread -- scripts/common.sh@365 -- # decimal 1 00:04:45.873 13:51:44 thread -- scripts/common.sh@353 -- # local d=1 00:04:45.873 13:51:44 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.873 13:51:44 thread -- scripts/common.sh@355 -- # echo 1 00:04:45.873 13:51:44 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.873 13:51:44 thread -- scripts/common.sh@366 -- # decimal 2 00:04:45.873 13:51:44 thread -- scripts/common.sh@353 -- # local d=2 00:04:45.873 13:51:44 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.873 13:51:44 thread -- scripts/common.sh@355 -- # echo 2 00:04:45.873 13:51:44 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.873 13:51:44 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.873 13:51:44 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.873 13:51:44 thread -- scripts/common.sh@368 -- # return 0 00:04:45.873 13:51:44 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.873 13:51:44 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.873 --rc genhtml_branch_coverage=1 00:04:45.873 --rc genhtml_function_coverage=1 00:04:45.873 --rc genhtml_legend=1 00:04:45.873 --rc geninfo_all_blocks=1 00:04:45.873 --rc geninfo_unexecuted_blocks=1 00:04:45.873 00:04:45.873 ' 00:04:45.873 13:51:44 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.873 --rc genhtml_branch_coverage=1 00:04:45.873 --rc genhtml_function_coverage=1 00:04:45.873 --rc genhtml_legend=1 00:04:45.873 --rc geninfo_all_blocks=1 00:04:45.873 --rc geninfo_unexecuted_blocks=1 00:04:45.873 00:04:45.873 ' 00:04:45.873 13:51:44 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.873 --rc genhtml_branch_coverage=1 00:04:45.873 --rc genhtml_function_coverage=1 00:04:45.873 --rc genhtml_legend=1 00:04:45.873 --rc geninfo_all_blocks=1 00:04:45.873 --rc geninfo_unexecuted_blocks=1 00:04:45.873 00:04:45.873 ' 00:04:45.873 13:51:44 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.873 --rc genhtml_branch_coverage=1 00:04:45.873 --rc genhtml_function_coverage=1 00:04:45.873 --rc genhtml_legend=1 00:04:45.873 --rc geninfo_all_blocks=1 00:04:45.873 --rc geninfo_unexecuted_blocks=1 00:04:45.873 00:04:45.873 ' 00:04:45.873 13:51:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:45.873 13:51:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:45.873 13:51:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.873 13:51:44 thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.873 ************************************ 00:04:45.873 START TEST thread_poller_perf 00:04:45.873 ************************************ 00:04:45.873 13:51:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:45.873 [2024-10-30 13:51:44.089591] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:45.873 [2024-10-30 13:51:44.089695] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid799416 ] 00:04:46.134 [2024-10-30 13:51:44.176091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.134 [2024-10-30 13:51:44.206676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.134 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:47.079 [2024-10-30T12:51:45.378Z] ====================================== 00:04:47.079 [2024-10-30T12:51:45.378Z] busy:2407014702 (cyc) 00:04:47.079 [2024-10-30T12:51:45.378Z] total_run_count: 418000 00:04:47.079 [2024-10-30T12:51:45.378Z] tsc_hz: 2400000000 (cyc) 00:04:47.079 [2024-10-30T12:51:45.378Z] ====================================== 00:04:47.079 [2024-10-30T12:51:45.378Z] poller_cost: 5758 (cyc), 2399 (nsec) 00:04:47.079 00:04:47.079 real 0m1.171s 00:04:47.079 user 0m1.087s 00:04:47.079 sys 0m0.081s 00:04:47.079 13:51:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.079 13:51:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.079 ************************************ 00:04:47.079 END TEST thread_poller_perf 00:04:47.079 ************************************ 00:04:47.079 13:51:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.079 13:51:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:47.079 13:51:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.079 13:51:45 thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.079 ************************************ 00:04:47.079 START TEST thread_poller_perf 00:04:47.079 ************************************ 00:04:47.079 13:51:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.079 [2024-10-30 13:51:45.338392] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:04:47.079 [2024-10-30 13:51:45.338487] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid799764 ] 00:04:47.340 [2024-10-30 13:51:45.428473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.340 [2024-10-30 13:51:45.464430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.340 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:48.282 [2024-10-30T12:51:46.581Z] ====================================== 00:04:48.282 [2024-10-30T12:51:46.581Z] busy:2401330418 (cyc) 00:04:48.282 [2024-10-30T12:51:46.581Z] total_run_count: 5562000 00:04:48.282 [2024-10-30T12:51:46.581Z] tsc_hz: 2400000000 (cyc) 00:04:48.282 [2024-10-30T12:51:46.581Z] ====================================== 00:04:48.282 [2024-10-30T12:51:46.581Z] poller_cost: 431 (cyc), 179 (nsec) 00:04:48.282 00:04:48.282 real 0m1.174s 00:04:48.282 user 0m1.087s 00:04:48.282 sys 0m0.082s 00:04:48.282 13:51:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.282 13:51:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.282 ************************************ 00:04:48.282 END TEST thread_poller_perf 00:04:48.282 ************************************ 00:04:48.282 13:51:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:48.282 00:04:48.282 real 0m2.704s 00:04:48.282 user 0m2.366s 00:04:48.282 sys 0m0.351s 00:04:48.282 13:51:46 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.282 13:51:46 thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.282 ************************************ 00:04:48.282 END TEST thread 00:04:48.282 ************************************ 00:04:48.282 13:51:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:48.282 13:51:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:48.282 13:51:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.282 13:51:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.282 13:51:46 -- common/autotest_common.sh@10 -- # set +x 00:04:48.542 ************************************ 00:04:48.542 START TEST app_cmdline 00:04:48.542 ************************************ 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:48.542 * Looking for test storage... 
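The poller_cost figures printed by the two runs above follow directly from the other counters: cycles per poll is busy cycles divided by total_run_count, and the nanosecond value converts that through the reported 2400000000 (cyc) tsc_hz. A quick cross-check of the printed numbers (a standalone awk snippet using the values above, assuming truncating integer rounding; it is not SPDK code):

awk 'BEGIN {
  c1 = int(2407014702 / 418000);    # run with 1 us poller period
  c2 = int(2401330418 / 5562000);   # run with 0 us poller period
  printf "run1: %d cyc  %d nsec\n", c1, int(c1 * 1e9 / 2400000000);
  printf "run2: %d cyc  %d nsec\n", c2, int(c2 * 1e9 / 2400000000);
}'
# prints 5758 cyc / 2399 nsec and 431 cyc / 179 nsec, matching the poller_cost lines above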
00:04:48.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.542 13:51:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.542 --rc genhtml_branch_coverage=1 00:04:48.542 --rc genhtml_function_coverage=1 00:04:48.542 --rc genhtml_legend=1 00:04:48.542 --rc geninfo_all_blocks=1 00:04:48.542 --rc geninfo_unexecuted_blocks=1 00:04:48.542 00:04:48.542 ' 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.542 --rc genhtml_branch_coverage=1 00:04:48.542 --rc genhtml_function_coverage=1 00:04:48.542 --rc genhtml_legend=1 00:04:48.542 --rc geninfo_all_blocks=1 00:04:48.542 --rc geninfo_unexecuted_blocks=1 
00:04:48.542 00:04:48.542 ' 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.542 --rc genhtml_branch_coverage=1 00:04:48.542 --rc genhtml_function_coverage=1 00:04:48.542 --rc genhtml_legend=1 00:04:48.542 --rc geninfo_all_blocks=1 00:04:48.542 --rc geninfo_unexecuted_blocks=1 00:04:48.542 00:04:48.542 ' 00:04:48.542 13:51:46 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.542 --rc genhtml_branch_coverage=1 00:04:48.542 --rc genhtml_function_coverage=1 00:04:48.542 --rc genhtml_legend=1 00:04:48.542 --rc geninfo_all_blocks=1 00:04:48.542 --rc geninfo_unexecuted_blocks=1 00:04:48.542 00:04:48.542 ' 00:04:48.542 13:51:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:48.542 13:51:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=800174 00:04:48.542 13:51:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 800174 00:04:48.543 13:51:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:48.543 13:51:46 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 800174 ']' 00:04:48.543 13:51:46 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.543 13:51:46 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.543 13:51:46 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.543 13:51:46 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.543 13:51:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:48.803 [2024-10-30 13:51:46.863524] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:04:48.803 [2024-10-30 13:51:46.863579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800174 ] 00:04:48.803 [2024-10-30 13:51:46.947463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.803 [2024-10-30 13:51:46.978742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.374 13:51:47 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.374 13:51:47 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:49.374 13:51:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:49.636 { 00:04:49.636 "version": "SPDK v25.01-pre git sha1 1953a4915", 00:04:49.636 "fields": { 00:04:49.636 "major": 25, 00:04:49.636 "minor": 1, 00:04:49.636 "patch": 0, 00:04:49.636 "suffix": "-pre", 00:04:49.636 "commit": "1953a4915" 00:04:49.636 } 00:04:49.636 } 00:04:49.636 13:51:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:49.636 13:51:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:49.636 13:51:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:49.636 13:51:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:49.637 13:51:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:49.637 13:51:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:49.637 13:51:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.637 13:51:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:49.637 13:51:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:49.637 13:51:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:49.637 13:51:47 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:49.897 request: 00:04:49.897 { 00:04:49.897 "method": "env_dpdk_get_mem_stats", 00:04:49.897 "req_id": 1 00:04:49.897 } 00:04:49.897 Got JSON-RPC error response 00:04:49.897 response: 00:04:49.897 { 00:04:49.897 "code": -32601, 00:04:49.897 "message": "Method not found" 00:04:49.897 } 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.897 13:51:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 800174 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 800174 ']' 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 800174 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 800174 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 800174' 00:04:49.897 killing process with pid 800174 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@973 -- # kill 800174 00:04:49.897 13:51:48 app_cmdline -- common/autotest_common.sh@978 -- # wait 800174 00:04:50.158 00:04:50.158 real 0m1.699s 00:04:50.158 user 0m2.067s 00:04:50.158 sys 0m0.430s 00:04:50.158 13:51:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.158 13:51:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:50.158 ************************************ 00:04:50.158 END TEST app_cmdline 00:04:50.158 ************************************ 00:04:50.158 13:51:48 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:50.158 13:51:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.158 13:51:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.158 13:51:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.158 ************************************ 00:04:50.158 START TEST version 00:04:50.158 ************************************ 00:04:50.158 13:51:48 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:50.418 * Looking for test storage... 
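The "Method not found" (-32601) response in the app_cmdline run above comes from the RPC allowlist: cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is reported as not found. Retraced by hand (relative paths assumed, as in the cpu_locks sketch above; illustration only):

./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version         # allowed: returns the version JSON shown above
./scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
./scripts/rpc.py env_dpdk_get_mem_stats   # not on the allowlist: -32601 "Method not found"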
00:04:50.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:50.418 13:51:48 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.418 13:51:48 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.418 13:51:48 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.418 13:51:48 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.418 13:51:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.418 13:51:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.418 13:51:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.419 13:51:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.419 13:51:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.419 13:51:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.419 13:51:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.419 13:51:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.419 13:51:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.419 13:51:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.419 13:51:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.419 13:51:48 version -- scripts/common.sh@344 -- # case "$op" in 00:04:50.419 13:51:48 version -- scripts/common.sh@345 -- # : 1 00:04:50.419 13:51:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.419 13:51:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.419 13:51:48 version -- scripts/common.sh@365 -- # decimal 1 00:04:50.419 13:51:48 version -- scripts/common.sh@353 -- # local d=1 00:04:50.419 13:51:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.419 13:51:48 version -- scripts/common.sh@355 -- # echo 1 00:04:50.419 13:51:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.419 13:51:48 version -- scripts/common.sh@366 -- # decimal 2 00:04:50.419 13:51:48 version -- scripts/common.sh@353 -- # local d=2 00:04:50.419 13:51:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.419 13:51:48 version -- scripts/common.sh@355 -- # echo 2 00:04:50.419 13:51:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.419 13:51:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.419 13:51:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.419 13:51:48 version -- scripts/common.sh@368 -- # return 0 00:04:50.419 13:51:48 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.419 13:51:48 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.419 --rc genhtml_branch_coverage=1 00:04:50.419 --rc genhtml_function_coverage=1 00:04:50.419 --rc genhtml_legend=1 00:04:50.419 --rc geninfo_all_blocks=1 00:04:50.419 --rc geninfo_unexecuted_blocks=1 00:04:50.419 00:04:50.419 ' 00:04:50.419 13:51:48 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.419 --rc genhtml_branch_coverage=1 00:04:50.419 --rc genhtml_function_coverage=1 00:04:50.419 --rc genhtml_legend=1 00:04:50.419 --rc geninfo_all_blocks=1 00:04:50.419 --rc geninfo_unexecuted_blocks=1 00:04:50.419 00:04:50.419 ' 00:04:50.419 13:51:48 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.419 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.419 --rc genhtml_branch_coverage=1 00:04:50.419 --rc genhtml_function_coverage=1 00:04:50.419 --rc genhtml_legend=1 00:04:50.419 --rc geninfo_all_blocks=1 00:04:50.419 --rc geninfo_unexecuted_blocks=1 00:04:50.419 00:04:50.419 ' 00:04:50.419 13:51:48 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.419 --rc genhtml_branch_coverage=1 00:04:50.419 --rc genhtml_function_coverage=1 00:04:50.419 --rc genhtml_legend=1 00:04:50.419 --rc geninfo_all_blocks=1 00:04:50.419 --rc geninfo_unexecuted_blocks=1 00:04:50.419 00:04:50.419 ' 00:04:50.419 13:51:48 version -- app/version.sh@17 -- # get_header_version major 00:04:50.419 13:51:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:50.419 13:51:48 version -- app/version.sh@14 -- # cut -f2 00:04:50.419 13:51:48 version -- app/version.sh@14 -- # tr -d '"' 00:04:50.419 13:51:48 version -- app/version.sh@17 -- # major=25 00:04:50.419 13:51:48 version -- app/version.sh@18 -- # get_header_version minor 00:04:50.419 13:51:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:50.419 13:51:48 version -- app/version.sh@14 -- # cut -f2 00:04:50.419 13:51:48 version -- app/version.sh@14 -- # tr -d '"' 00:04:50.419 13:51:48 version -- app/version.sh@18 -- # minor=1 00:04:50.419 13:51:48 version -- app/version.sh@19 -- # get_header_version patch 00:04:50.419 13:51:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:50.419 13:51:48 version -- app/version.sh@14 -- # cut -f2 00:04:50.419 13:51:48 version -- app/version.sh@14 -- # tr -d '"' 00:04:50.419 13:51:48 version -- app/version.sh@19 -- # patch=0 00:04:50.419 13:51:48 version -- app/version.sh@20 -- # get_header_version suffix 00:04:50.419 13:51:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:50.419 13:51:48 version -- app/version.sh@14 -- # cut -f2 00:04:50.419 13:51:48 version -- app/version.sh@14 -- # tr -d '"' 00:04:50.419 13:51:48 version -- app/version.sh@20 -- # suffix=-pre 00:04:50.419 13:51:48 version -- app/version.sh@22 -- # version=25.1 00:04:50.419 13:51:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:50.419 13:51:48 version -- app/version.sh@28 -- # version=25.1rc0 00:04:50.419 13:51:48 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:50.419 13:51:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:50.419 13:51:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:50.419 13:51:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:50.419 00:04:50.419 real 0m0.273s 00:04:50.419 user 0m0.154s 00:04:50.419 sys 0m0.167s 00:04:50.419 13:51:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.419 
13:51:48 version -- common/autotest_common.sh@10 -- # set +x 00:04:50.419 ************************************ 00:04:50.419 END TEST version 00:04:50.419 ************************************ 00:04:50.419 13:51:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:50.419 13:51:48 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:50.419 13:51:48 -- spdk/autotest.sh@194 -- # uname -s 00:04:50.419 13:51:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:50.419 13:51:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:50.419 13:51:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:50.419 13:51:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:50.419 13:51:48 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:50.419 13:51:48 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:50.419 13:51:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.419 13:51:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.681 13:51:48 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:50.681 13:51:48 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:50.681 13:51:48 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:50.681 13:51:48 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:50.681 13:51:48 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:04:50.681 13:51:48 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:04:50.681 13:51:48 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:50.681 13:51:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:50.681 13:51:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.681 13:51:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.681 ************************************ 00:04:50.681 START TEST nvmf_tcp 00:04:50.681 ************************************ 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:50.681 * Looking for test storage... 
00:04:50.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.681 13:51:48 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.681 --rc genhtml_branch_coverage=1 00:04:50.681 --rc genhtml_function_coverage=1 00:04:50.681 --rc genhtml_legend=1 00:04:50.681 --rc geninfo_all_blocks=1 00:04:50.681 --rc geninfo_unexecuted_blocks=1 00:04:50.681 00:04:50.681 ' 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.681 --rc genhtml_branch_coverage=1 00:04:50.681 --rc genhtml_function_coverage=1 00:04:50.681 --rc genhtml_legend=1 00:04:50.681 --rc geninfo_all_blocks=1 00:04:50.681 --rc geninfo_unexecuted_blocks=1 00:04:50.681 00:04:50.681 ' 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:50.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.681 --rc genhtml_branch_coverage=1 00:04:50.681 --rc genhtml_function_coverage=1 00:04:50.681 --rc genhtml_legend=1 00:04:50.681 --rc geninfo_all_blocks=1 00:04:50.681 --rc geninfo_unexecuted_blocks=1 00:04:50.681 00:04:50.681 ' 00:04:50.681 13:51:48 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.681 --rc genhtml_branch_coverage=1 00:04:50.681 --rc genhtml_function_coverage=1 00:04:50.681 --rc genhtml_legend=1 00:04:50.681 --rc geninfo_all_blocks=1 00:04:50.681 --rc geninfo_unexecuted_blocks=1 00:04:50.681 00:04:50.681 ' 00:04:50.681 13:51:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:50.942 13:51:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:50.942 13:51:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:50.942 13:51:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:50.942 13:51:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.942 13:51:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.942 ************************************ 00:04:50.942 START TEST nvmf_target_core 00:04:50.942 ************************************ 00:04:50.942 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:50.942 * Looking for test storage... 00:04:50.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.943 --rc genhtml_branch_coverage=1 00:04:50.943 --rc genhtml_function_coverage=1 00:04:50.943 --rc genhtml_legend=1 00:04:50.943 --rc geninfo_all_blocks=1 00:04:50.943 --rc geninfo_unexecuted_blocks=1 00:04:50.943 00:04:50.943 ' 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.943 --rc genhtml_branch_coverage=1 00:04:50.943 --rc genhtml_function_coverage=1 00:04:50.943 --rc genhtml_legend=1 00:04:50.943 --rc geninfo_all_blocks=1 00:04:50.943 --rc geninfo_unexecuted_blocks=1 00:04:50.943 00:04:50.943 ' 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.943 --rc genhtml_branch_coverage=1 00:04:50.943 --rc genhtml_function_coverage=1 00:04:50.943 --rc genhtml_legend=1 00:04:50.943 --rc geninfo_all_blocks=1 00:04:50.943 --rc geninfo_unexecuted_blocks=1 00:04:50.943 00:04:50.943 ' 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.943 --rc genhtml_branch_coverage=1 00:04:50.943 --rc genhtml_function_coverage=1 00:04:50.943 --rc genhtml_legend=1 00:04:50.943 --rc geninfo_all_blocks=1 00:04:50.943 --rc geninfo_unexecuted_blocks=1 00:04:50.943 00:04:50.943 ' 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.943 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:51.205 
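For reference, the cmp_versions trace above (lt 1.15 2 with IFS set to ".-:" and an element-wise numeric compare) is the gate that decides whether the legacy lcov --rc option names get exported as LCOV_OPTS/LCOV for the coverage run. A minimal stand-alone sketch of that comparison logic, written for illustration rather than as the actual scripts/common.sh helper:

# Illustrative sketch only: split two version strings on ".-:" and compare
# field by field, treating missing fields as 0, as the trace above does.
version_lt() {
    local IFS='.-:' i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

# lcov 1.15 sorts before 2, so the old-style rc options are selected,
# matching the lcov_rc_opt value seen in the trace.
version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'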
************************************ 00:04:51.205 START TEST nvmf_abort 00:04:51.205 ************************************ 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:51.205 * Looking for test storage... 00:04:51.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.205 --rc genhtml_branch_coverage=1 00:04:51.205 --rc genhtml_function_coverage=1 00:04:51.205 --rc genhtml_legend=1 00:04:51.205 --rc geninfo_all_blocks=1 00:04:51.205 --rc geninfo_unexecuted_blocks=1 00:04:51.205 00:04:51.205 ' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.205 --rc genhtml_branch_coverage=1 00:04:51.205 --rc genhtml_function_coverage=1 00:04:51.205 --rc genhtml_legend=1 00:04:51.205 --rc geninfo_all_blocks=1 00:04:51.205 --rc geninfo_unexecuted_blocks=1 00:04:51.205 00:04:51.205 ' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.205 --rc genhtml_branch_coverage=1 00:04:51.205 --rc genhtml_function_coverage=1 00:04:51.205 --rc genhtml_legend=1 00:04:51.205 --rc geninfo_all_blocks=1 00:04:51.205 --rc geninfo_unexecuted_blocks=1 00:04:51.205 00:04:51.205 ' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.205 --rc genhtml_branch_coverage=1 00:04:51.205 --rc genhtml_function_coverage=1 00:04:51.205 --rc genhtml_legend=1 00:04:51.205 --rc geninfo_all_blocks=1 00:04:51.205 --rc geninfo_unexecuted_blocks=1 00:04:51.205 00:04:51.205 ' 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.205 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
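nvmftestinit, traced below, starts by resolving the physical test NICs: gather_supported_nvmf_pci_devs matches the Intel e810 device IDs (0x1592/0x159b) and then maps each PCI function to its kernel interface through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A hedged stand-alone sketch of that sysfs lookup, using the PCI addresses reported further down in the log:

# Illustrative sketch: resolve a PCI function to its net device by globbing
# /sys/bus/pci/devices/<addr>/net/, the same pattern the trace below uses.
for pci in 0000:4b:00.0 0000:4b:00.1; do          # addresses taken from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || { echo "no net device under $pci"; continue; }
    pci_net_devs=("${pci_net_devs[@]##*/}")        # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done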
00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:51.467 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:51.468 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:51.468 13:51:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:59.608 13:51:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:04:59.608 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:59.608 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:04:59.609 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:59.609 13:51:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:04:59.609 Found net devices under 0000:4b:00.0: cvl_0_0 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:04:59.609 Found net devices under 0000:4b:00.1: cvl_0_1 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:59.609 13:51:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:59.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:59.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:04:59.609 00:04:59.609 --- 10.0.0.2 ping statistics --- 00:04:59.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:59.609 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:59.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:04:59.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:04:59.609 00:04:59.609 --- 10.0.0.1 ping statistics --- 00:04:59.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:59.609 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:59.609 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=804634 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 804634 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 804634 ']' 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.609 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.610 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.610 [2024-10-30 13:51:57.069119] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
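The nvmf_tcp_init and nvmfappstart steps traced above reduce to a small amount of plumbing: move one port of the e810 pair into a private network namespace as the target side (10.0.0.2), keep the other port on the host as the initiator side (10.0.0.1), open TCP port 4420, sanity-check both directions with ping, and launch nvmf_tgt inside the namespace. A condensed sketch of those commands as they appear in the trace (root required; interface names and paths are the ones reported by the log):

# Condensed from the trace above; assumes cvl_0_0/cvl_0_1 exist on the host.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0          # becomes 10.0.0.2 inside the namespace
INITIATOR_IF=cvl_0_1       # stays on the host as 10.0.0.1

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator

# Start the target inside the namespace, as nvmfappstart does above.
ip netns exec "$NS" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &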
00:04:59.610 [2024-10-30 13:51:57.069186] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:59.610 [2024-10-30 13:51:57.175469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.610 [2024-10-30 13:51:57.229288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:59.610 [2024-10-30 13:51:57.229343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:59.610 [2024-10-30 13:51:57.229352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.610 [2024-10-30 13:51:57.229359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.610 [2024-10-30 13:51:57.229365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:04:59.610 [2024-10-30 13:51:57.231184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.610 [2024-10-30 13:51:57.231345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.610 [2024-10-30 13:51:57.231345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.610 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.610 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:59.610 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:59.610 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.610 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.871 [2024-10-30 13:51:57.950311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.871 Malloc0 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.871 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.871 Delay0 
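With the target listening for RPCs, the abort test setup is just the rpc_cmd sequence traced above and continued below: a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev (so abort requests have in-flight I/O to race against), then a subsystem, namespace and 10.0.0.2:4420 listener before the abort example is launched. Reproduced by hand it would look roughly like the following, using the scripts/rpc.py equivalents of the rpc_cmd calls with the parameters copied from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # repo path from the log

# Transport plus a delay-wrapped malloc bdev, as in the trace above.
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
$SPDK/scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
$SPDK/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Subsystem, namespace and listener, matching the rpc_cmd calls that follow below.
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# Drive it with the abort example, as run_test does in the trace below.
$SPDK/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128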
00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.871 [2024-10-30 13:51:58.034218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.871 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:59.871 [2024-10-30 13:51:58.144587] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:02.417 Initializing NVMe Controllers 00:05:02.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:02.417 controller IO queue size 128 less than required 00:05:02.417 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:02.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:02.417 Initialization complete. Launching workers. 
00:05:02.417 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28733 00:05:02.417 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28794, failed to submit 62 00:05:02.417 success 28737, unsuccessful 57, failed 0 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:02.417 rmmod nvme_tcp 00:05:02.417 rmmod nvme_fabrics 00:05:02.417 rmmod nvme_keyring 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 804634 ']' 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 804634 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 804634 ']' 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 804634 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 804634 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 804634' 00:05:02.417 killing process with pid 804634 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 804634 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 804634 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:02.417 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:04.331 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:04.332 00:05:04.332 real 0m13.300s 00:05:04.332 user 0m13.709s 00:05:04.332 sys 0m6.630s 00:05:04.332 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.332 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.332 ************************************ 00:05:04.332 END TEST nvmf_abort 00:05:04.332 ************************************ 00:05:04.332 13:52:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:04.332 13:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:04.332 13:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.332 13:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:04.592 ************************************ 00:05:04.592 START TEST nvmf_ns_hotplug_stress 00:05:04.592 ************************************ 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:04.592 * Looking for test storage... 
00:05:04.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.592 --rc genhtml_branch_coverage=1 00:05:04.592 --rc genhtml_function_coverage=1 00:05:04.592 --rc genhtml_legend=1 00:05:04.592 --rc geninfo_all_blocks=1 00:05:04.592 --rc geninfo_unexecuted_blocks=1 00:05:04.592 00:05:04.592 ' 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.592 --rc genhtml_branch_coverage=1 00:05:04.592 --rc genhtml_function_coverage=1 00:05:04.592 --rc genhtml_legend=1 00:05:04.592 --rc geninfo_all_blocks=1 00:05:04.592 --rc geninfo_unexecuted_blocks=1 00:05:04.592 00:05:04.592 ' 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.592 --rc genhtml_branch_coverage=1 00:05:04.592 --rc genhtml_function_coverage=1 00:05:04.592 --rc genhtml_legend=1 00:05:04.592 --rc geninfo_all_blocks=1 00:05:04.592 --rc geninfo_unexecuted_blocks=1 00:05:04.592 00:05:04.592 ' 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.592 --rc genhtml_branch_coverage=1 00:05:04.592 --rc genhtml_function_coverage=1 00:05:04.592 --rc genhtml_legend=1 00:05:04.592 --rc geninfo_all_blocks=1 00:05:04.592 --rc geninfo_unexecuted_blocks=1 00:05:04.592 00:05:04.592 ' 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.592 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:04.854 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:13.003 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:13.003 
13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:13.003 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:13.003 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:13.004 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:13.004 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:13.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:13.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:05:13.004 00:05:13.004 --- 10.0.0.2 ping statistics --- 00:05:13.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:13.004 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:13.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:13.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:05:13.004 00:05:13.004 --- 10.0.0.1 ping statistics --- 00:05:13.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:13.004 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=809484 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 809484 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
809484 ']' 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.004 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.004 [2024-10-30 13:52:10.524273] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:05:13.004 [2024-10-30 13:52:10.524340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:13.004 [2024-10-30 13:52:10.625331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.004 [2024-10-30 13:52:10.677413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:13.004 [2024-10-30 13:52:10.677467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:13.004 [2024-10-30 13:52:10.677475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.004 [2024-10-30 13:52:10.677483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.004 [2024-10-30 13:52:10.677490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:13.004 [2024-10-30 13:52:10.679533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.004 [2024-10-30 13:52:10.679693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.004 [2024-10-30 13:52:10.679695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.267 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.267 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:13.267 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:13.267 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.267 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.267 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:13.267 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:13.267 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:13.267 [2024-10-30 13:52:11.563577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.528 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:13.528 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:13.790 [2024-10-30 13:52:11.958714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:13.790 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:14.051 13:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:14.313 Malloc0 00:05:14.313 13:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:14.313 Delay0 00:05:14.313 13:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.575 13:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:14.835 NULL1 00:05:14.836 13:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:15.096 13:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:15.096 13:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=810082 00:05:15.096 13:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:15.096 13:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.481 Read completed with error (sct=0, sc=11) 00:05:16.481 13:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.481 13:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:16.481 13:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:16.481 true 00:05:16.481 13:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:16.481 13:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.424 13:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.685 13:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:17.685 13:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:17.685 true 00:05:17.685 13:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:17.685 13:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.946 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.206 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:18.206 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:18.206 true 00:05:18.206 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:18.206 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.591 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.591 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:19.591 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:19.851 true 00:05:19.851 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:19.851 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.795 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.795 13:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:20.795 13:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:21.055 true 00:05:21.055 13:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:21.055 13:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.315 13:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.315 13:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:21.315 13:52:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:21.574 true 00:05:21.574 13:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:21.574 13:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.835 13:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.096 13:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:22.096 13:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:22.096 true 00:05:22.096 13:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:22.096 13:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.357 13:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.616 13:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:22.617 13:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:22.617 true 00:05:22.617 13:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:22.617 13:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.876 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.136 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:23.136 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:23.136 true 00:05:23.136 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:23.136 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.396 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.655 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:23.655 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:23.655 true 00:05:23.915 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:23.915 13:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.853 13:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.113 13:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:25.113 13:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:25.374 true 00:05:25.374 13:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:25.374 13:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.315 13:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.315 13:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:26.315 13:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:26.575 true 00:05:26.575 13:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:26.575 13:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.575 13:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.834 13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:26.834 
13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:27.093 true 00:05:27.093 13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:27.093 13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.353 13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.353 13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:27.353 13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:27.614 true 00:05:27.614 13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:27.614 13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.874 13:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.874 13:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:27.874 13:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:28.135 true 00:05:28.135 13:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:28.135 13:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.517 13:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.517 13:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:29.517 13:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1016 00:05:29.517 true 00:05:29.517 13:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:29.517 13:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.457 13:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.717 13:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:30.717 13:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:30.717 true 00:05:30.717 13:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:30.717 13:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.978 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.238 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:31.238 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:31.238 true 00:05:31.498 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:31.498 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.438 13:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.699 13:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:32.699 13:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:32.960 true 00:05:32.960 13:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 
00:05:32.960 13:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.901 13:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.901 13:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:33.901 13:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:34.161 true 00:05:34.161 13:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:34.161 13:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.423 13:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.423 13:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:34.423 13:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:34.684 true 00:05:34.684 13:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:34.684 13:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.944 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.944 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:34.944 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:35.205 true 00:05:35.205 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:35.205 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.465 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.465 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 
00:05:35.465 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:35.727 true
00:05:35.727 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
00:05:35.727 13:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.987 13:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:35.987 13:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:35.987 13:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:36.248 true
00:05:36.248 13:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
00:05:36.248 13:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.508 13:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:36.770 13:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:36.770 13:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:36.770 true
00:05:36.770 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
00:05:36.770 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.031 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:37.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:37.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:37.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:37.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:37.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:37.322 [2024-10-30 13:52:35.349411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd *ERROR* entry repeats several hundred times, with timestamps running from 13:52:35.349411 through 13:52:35.367523; one further "Message suppressed 999 times: Read completed with error (sct=0, sc=15)" entry appears at 00:05:37.325; the intermediate repetitions are elided]
00:05:37.327 [2024-10-30 13:52:35.367523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-10-30 13:52:35.367551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.367999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.368986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369243] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.369930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.370221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.327 [2024-10-30 13:52:35.370249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 
[2024-10-30 13:52:35.370339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.370990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371830] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.371992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.372977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 
[2024-10-30 13:52:35.373185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.328 [2024-10-30 13:52:35.373874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.373898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.373927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.373955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.373985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374807] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.374980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 
[2024-10-30 13:52:35.375578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.375987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.376972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377453] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.329 [2024-10-30 13:52:35.377510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.377994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 
[2024-10-30 13:52:35.378224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.378743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.379302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.379332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.379362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.379392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.379423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.379455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.379485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.330 [2024-10-30 13:52:35.379515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(the same read error repeats continuously from 13:52:35.379 through 13:52:35.382; duplicate log lines condensed)
00:05:37.331 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:37.331 [2024-10-30 13:52:35.383556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:37.331 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
(the read error keeps repeating while the resize is issued; duplicate log lines condensed)
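The two ns_hotplug_stress.sh lines above are the interesting part of this burst: the harness bumps null_size to 1026 and resizes the NULL1 null bdev through rpc.py bdev_null_resize while reads against the namespace are still in flight, which is what keeps tripping the SGL-length check in ctrlr_bdev.c. A minimal sketch of that resize step is shown below; only the rpc.py path, the bdev_null_resize call, and the NULL1/1026 values come from the log itself, while the loop bounds, starting size, and variable names are assumptions rather than the actual contents of target/ns_hotplug_stress.sh:

#!/usr/bin/env bash
# Illustrative sketch only -- not the real target/ns_hotplug_stress.sh.
# rpc.py and the bdev_null_resize subcommand appear verbatim in the log above;
# the starting size and loop bounds below are assumed for the example.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1024                                  # assumed starting size of the NULL1 bdev
for _ in $(seq 1 10); do                        # assumed number of resize passes
    null_size=$((null_size + 1))                # grow the namespace a little each pass
    "$rpc" bdev_null_resize NULL1 "$null_size"  # resize NULL1 while host reads keep running
done

The reads that race each resize are completed back to the host with sct=0, sc=15 (noted by the suppressed-message line further down), which suggests the error flood is the expected by-product of the stress pattern rather than a harness failure.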
00:05:37.332 [2024-10-30 13:52:35.384645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(the same read error repeats continuously through 13:52:35.397; duplicate log lines condensed)
size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.397800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:37.335 [2024-10-30 13:52:35.397829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.397881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.397914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.397949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.397984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.398014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.398045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.398075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.398104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.335 [2024-10-30 13:52:35.398132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:37.336 [2024-10-30 13:52:35.398570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.398991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.399970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400502] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.400988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 
[2024-10-30 13:52:35.401251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.336 [2024-10-30 13:52:35.401370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.401973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402916] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.402974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 
[2024-10-30 13:52:35.403703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.403965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.337 [2024-10-30 13:52:35.404763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.404793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.404828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.404855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.404881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.404908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.404934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.404963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.404997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405844] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.405990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 
[2024-10-30 13:52:35.406733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.406853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.407994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.338 [2024-10-30 13:52:35.408348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408554] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.408976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 
[2024-10-30 13:52:35.409500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.409972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.339 [2024-10-30 13:52:35.410282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeat continuously from 13:52:35.410 through 13:52:35.430 ...] 00:05:37.345 [2024-10-30 13:52:35.430100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430872] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.430972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 
[2024-10-30 13:52:35.431670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.345 [2024-10-30 13:52:35.431760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.431791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.431819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.431852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.431880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.431909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.431939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.431966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.432717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433710] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.433981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:37.346 [2024-10-30 13:52:35.434567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434602] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.434992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 
[2024-10-30 13:52:35.435520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.346 [2024-10-30 13:52:35.435722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.435754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.435782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.435810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.435837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.435866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.435893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.435921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.435947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.435979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.436971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437250] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.437996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 
[2024-10-30 13:52:35.438196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.438900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.347 [2024-10-30 13:52:35.439771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.439795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.439826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.439857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.439886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.439924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.439957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.439987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440305] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.440993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 
[2024-10-30 13:52:35.441143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.441976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.442006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.442038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.442067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.442102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.348 [2024-10-30 13:52:35.442138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR*: "Read NLB 1 * block size 512 > SGL length 1" message is repeated several hundred more times between 2024-10-30 13:52:35.442 and 13:52:35.461 ...]
00:05:37.354 [2024-10-30 13:52:35.461916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.461946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.461974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.462979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463010] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 
[2024-10-30 13:52:35.463873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.463996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.354 [2024-10-30 13:52:35.464742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.464777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.464807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.464838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.464868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.464895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.464926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.464956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.464988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465474] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.465782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 
[2024-10-30 13:52:35.466770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.466973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.467997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.355 [2024-10-30 13:52:35.468489] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.468997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 
[2024-10-30 13:52:35.469827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.469973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.470969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471407] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.471997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.472028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.472061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.472089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.356 [2024-10-30 13:52:35.472134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:37.357 [2024-10-30 13:52:35.472164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472656] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.472976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 
[2024-10-30 13:52:35.473493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.473975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.357 [2024-10-30 13:52:35.474432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(the preceding ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error repeats several hundred times with identical contents; only the microsecond timestamp advances from 13:52:35.474 to 13:52:35.493)
00:05:37.362 [2024-10-30 13:52:35.493406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.493999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.494023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.494052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.494083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.362 [2024-10-30 13:52:35.494112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494173] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.494809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 
[2024-10-30 13:52:35.495509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.495992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.496860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497200] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.363 [2024-10-30 13:52:35.497673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.497989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 
[2024-10-30 13:52:35.498018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.498985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.499979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500195] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.364 [2024-10-30 13:52:35.500445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.500988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 
[2024-10-30 13:52:35.501017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.501977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502733] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.502973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 
[2024-10-30 13:52:35.503549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.365 [2024-10-30 13:52:35.503875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.503905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.503936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.503965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.503996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.504981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.366 [2024-10-30 13:52:35.505650] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:05:37.366 [2024-10-30 13:52:35.505681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:05:37.366 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:05:37.371 [2024-10-30 13:52:35.524908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:05:37.371 
[2024-10-30 13:52:35.524937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.524966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.524996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.525987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.526015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.526044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.526073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.526102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.371 [2024-10-30 13:52:35.526132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526691] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.526933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 
[2024-10-30 13:52:35.527824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.527996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 true 00:05:37.372 [2024-10-30 13:52:35.528930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.528990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.529020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.529047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.529075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.529104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.529132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.529160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.529289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.372 [2024-10-30 13:52:35.529318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.529346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.529376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.529407] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.529430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.529458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.529910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.529943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.529971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 
[2024-10-30 13:52:35.530675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.530999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.531889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532600] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.532993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.373 [2024-10-30 13:52:35.533273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 
[2024-10-30 13:52:35.533360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.533965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.534994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535022] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 
[2024-10-30 13:52:35.535753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.535950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.536583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.537031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.537062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.374 [2024-10-30 13:52:35.537107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:05:37.375 [2024-10-30 13:52:35.537138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical *ERROR* line from ctrlr_bdev.c:361 repeats back-to-back through 13:52:35.542836; duplicate entries omitted ...]
00:05:37.376 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
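What is being suppressed is a single message class: each read specifies NLB 1 at a 512-byte block size, which is larger than the 1-byte data buffer (SGL length 1) described by the command, so every one of them fails; the suppressed host-side completion status (sct=0, sc=15) corresponds to the NVMe generic status code 0x0f, Data SGL Length Invalid. When triaging a console log like this one, it is usually quicker to count the flood than to read it; a minimal sketch in bash, assuming the console output has been saved to a hypothetical console.log:

  # Count the duplicated read-error entries; plain `uniq` would not collapse
  # them because every entry carries its own timestamp.
  grep -o 'Read NLB 1 \* block size 512 > SGL length 1' console.log | wc -l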
00:05:37.379 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
00:05:37.380 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.380 [... ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 -- message continues to repeat through 13:52:35.571450 ...]
00:05:37.384 [2024-10-30 13:52:35.571477] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.571991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 
[2024-10-30 13:52:35.572230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.572974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.573005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.384 [2024-10-30 13:52:35.573043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.573989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574103] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.574954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 
[2024-10-30 13:52:35.575019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.575990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.385 [2024-10-30 13:52:35.576693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.576730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.576762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.576793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.576825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.576854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.576883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.576914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.576951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.576988] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.577956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 
[2024-10-30 13:52:35.577985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.578988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 Message suppressed 999 times: [2024-10-30 13:52:35.579575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 Read completed with error (sct=0, sc=15) 00:05:37.386 [2024-10-30 13:52:35.579610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:37.386 [2024-10-30 13:52:35.579641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.579841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.386 [2024-10-30 13:52:35.580586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.580980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581566] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.581980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 
[2024-10-30 13:52:35.582484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.582997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.387 [2024-10-30 13:52:35.583288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.387 [... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* lines (Read NLB 1 * block size 512 > SGL length 1) repeated for timestamps 2024-10-30 13:52:35.583288 through 13:52:35.603151, duplicates elided ...] 00:05:37.682 [2024-10-30 13:52:35.603189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.603980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604237] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.682 [2024-10-30 13:52:35.604721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.604756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.604784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.604813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.604843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.604871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.604900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.604927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.604951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.604983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 
[2024-10-30 13:52:35.605041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.605979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.606974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607082] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 
[2024-10-30 13:52:35.607838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.607973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.683 [2024-10-30 13:52:35.608354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.608987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609613] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.609986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 
[2024-10-30 13:52:35.610762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.610996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.684 [2024-10-30 13:52:35.611566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.611994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612318] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.612980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 
[2024-10-30 13:52:35.613597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.613982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.614998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.615043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.615079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.615117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.615143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.615175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.615206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.685 [2024-10-30 13:52:35.615235] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:37.686 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:05:37.685 - 00:05:37.690 ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same message repeated several hundred times, timestamps 2024-10-30 13:52:35.615265 through 13:52:35.633667; only the timestamp changes)
[2024-10-30 13:52:35.633697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.690 [2024-10-30 13:52:35.633727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.633761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.633792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.633821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.633853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.633882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.633911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.633946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.633975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.634989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635389] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.635954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 
[2024-10-30 13:52:35.636750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.636996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.691 [2024-10-30 13:52:35.637627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.637988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638415] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.638990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 
[2024-10-30 13:52:35.639191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.639974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.640441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.641048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.641079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.641116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.641145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.692 [2024-10-30 13:52:35.641178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641380] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.641973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 
[2024-10-30 13:52:35.642149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.642993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643962] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.643993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.693 [2024-10-30 13:52:35.644589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 
[2024-10-30 13:52:35.644743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.644971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.645987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 [2024-10-30 13:52:35.646812] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.694 
[2024-10-30 13:52:35.646839 .. 2024-10-30 13:52:35.665785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same error emitted continuously over this interval; per-occurrence timestamps elided) 00:05:37.694 .. 00:05:37.699 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:37.695 
[2024-10-30 13:52:35.665816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.665845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.665873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.665902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.665931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.665959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.665995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.699 [2024-10-30 13:52:35.666728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.666761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.666792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.666821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.666856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.666885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.666912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.666942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.666971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.666997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667758] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.667991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 
[2024-10-30 13:52:35.668696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.668994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.700 [2024-10-30 13:52:35.669833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.669860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.669889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.669924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.669951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670386] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.670994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 
[2024-10-30 13:52:35.671583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.671977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.672912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.673051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.673086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.673116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.673162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.673189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.701 [2024-10-30 13:52:35.673249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673308] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.673983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 
[2024-10-30 13:52:35.674365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.674997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.675980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676100] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.702 [2024-10-30 13:52:35.676570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 
[2024-10-30 13:52:35.676910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.676998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.677816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678545] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.678974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 
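The repeated *ERROR* line above is expected unit-test output: it reports a read command whose requested transfer length (NLB * block size) exceeds the SGL buffer supplied with it, so the command is rejected. Below is a minimal standalone C sketch of that kind of length check, for illustration only; the function name read_cmd_length_ok and the standalone main are hypothetical and are not the ctrlr_bdev.c implementation.

/*
 * Illustrative sketch only, assuming the condition reported above: reject a
 * read whose data length (NLB * block size) is larger than the SGL (payload
 * buffer) length. Not SPDK code; names here are hypothetical.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
read_cmd_length_ok(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
	if (nlb * block_size > sgl_length) {
		/* Mirrors the message seen in the log above. */
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu64 "\n", nlb, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* Values from the log: NLB 1, block size 512, SGL length 1 -> rejected. */
	bool ok = read_cmd_length_ok(1, 512, 1);
	printf("read command %s\n", ok ? "accepted" : "rejected");
	return 0;
}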
[2024-10-30 13:52:35.679391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.703 [2024-10-30 13:52:35.679963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.679992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.680969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681389] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.681976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 
[2024-10-30 13:52:35.682212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.682973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.704 [2024-10-30 13:52:35.683665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.683999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684171] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.684732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 
[2024-10-30 13:52:35.685303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.685985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.705 [2024-10-30 13:52:35.686819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.686850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.686889] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.686919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.686948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.686975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:37.706 [2024-10-30 13:52:35.687379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687944] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.687978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 
[2024-10-30 13:52:35.688774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.688980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.689977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.706 [2024-10-30 13:52:35.690384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690680] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.690984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 
[2024-10-30 13:52:35.691466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.691589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.692971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 [2024-10-30 13:52:35.693607] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.707 
[2024-10-30 13:52:35.693638 - 13:52:35.707784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same message repeated continuously over this interval; duplicate log lines collapsed) 00:05:37.711 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.711 
13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.711 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) (notice repeated 6 times after this command; duplicates collapsed) 00:05:37.711 
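The errors collapsed above all record the same rejection: for each read, the target compares the requested transfer length (NLB, the number of logical blocks, multiplied by the block size) against the length of the SGL supplied with the command, and completes the read with an error when the data would not fit. A minimal C sketch of a check with that shape, using illustrative names only (this is not the SPDK source):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative sketch of the length check implied by the repeated log message above. */
    static bool read_cmd_fits_sgl(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
    {
        if (nlb * block_size > sgl_length) {
            /* Mirrors the message "Read NLB 1 * block size 512 > SGL length 1", i.e. 1 * 512 > 1. */
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu64 "\n",
                    nlb, block_size, sgl_length);
            return false; /* caller would fail the command instead of issuing the bdev read */
        }
        return true;
    }

    int main(void)
    {
        /* Values taken from the suppressed reads in this run: NLB 1, block size 512, SGL length 1. */
        read_cmd_fits_sgl(1, 512, 1);
        return 0;
    }
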
[2024-10-30 13:52:35.901568 - 13:52:35.906320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same message repeated continuously over this interval; duplicate log lines collapsed) 00:05:37.713 [2024-10-30 13:52:35.906351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.906971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907356] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.907988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 
[2024-10-30 13:52:35.908133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.908983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.713 [2024-10-30 13:52:35.909741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.909778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.909807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.909836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.909863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.909892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.909922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.909952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.909983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910180] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 
[2024-10-30 13:52:35.910959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.910990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.911977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.714 [2024-10-30 13:52:35.912698] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.912728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.912769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.912798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.912826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.912855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.912882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.912910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.912942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.912969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 
[2024-10-30 13:52:35.913497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.913679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.914997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915366] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.915999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.916131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.916160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.916190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.916220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.916684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.916716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 
[2024-10-30 13:52:35.916752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.916780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.715 [2024-10-30 13:52:35.916810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.916837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.916868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.916897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.916926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.916955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.916989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:37.716 [2024-10-30 13:52:35.917138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 
13:52:35.917501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.917972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.918010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.918039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.918068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.918095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.918126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.918155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.918184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.918213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:37.716 [2024-10-30 13:52:35.918245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1
00:05:37.716 [2024-10-30 13:52:35.918274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error repeats continuously from 13:52:35.918 through 13:52:35.937]
00:05:37.720 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:37.721 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
[... the ctrlr_bdev.c *ERROR* entry above repeats continuously from 13:52:35.934 through 13:52:35.953, before, between, and after these two commands, differing only in timestamp; duplicate entries omitted ...]
size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.952806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.952836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.952867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.952896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.952925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.952958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.952989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:38.005 [2024-10-30 13:52:35.953644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.005 [2024-10-30 13:52:35.953674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
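What this flood is exercising: nvmf_bdev_ctrlr_read_cmd rejects a read whose implied transfer (NLB 1 * block size 512 = 512 bytes) is larger than the 1-byte payload described by the SGL, and the command is completed with sct=0, sc=15, which corresponds to the NVMe generic status Data SGL Length Invalid (0x0f). Below is a minimal standalone sketch of that length check, written for illustration only; the helper name read_cmd_length_ok and its parameters are assumptions for this example, not SPDK's actual code or API.

/*
 * Illustration only (not SPDK's implementation): the NLB field of a read
 * command is zero-based, so the transfer is (NLB + 1) * block_size bytes,
 * and it must fit in the buffer described by the SGL; otherwise the command
 * completes with Data SGL Length Invalid (sct=0, sc=0x0f, i.e. sc=15).
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SC_DATA_SGL_LENGTH_INVALID 0x0f /* NVMe generic status code 15 */

static bool
read_cmd_length_ok(uint32_t cdw12, uint32_t block_size, uint32_t sgl_length,
                   uint8_t *sct, uint8_t *sc)
{
    uint64_t num_blocks = (uint64_t)(cdw12 & 0xffffu) + 1; /* NLB is 0's-based */

    if (num_blocks * block_size > sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n", num_blocks, block_size, sgl_length);
        *sct = 0;                         /* generic command status */
        *sc = SC_DATA_SGL_LENGTH_INVALID; /* sc = 15, as in the log above */
        return false;
    }
    return true;
}

int
main(void)
{
    uint8_t sct, sc;

    /* NLB field 0 -> 1 block of 512 bytes, but the SGL describes only 1 byte. */
    if (!read_cmd_length_ok(0, 512, 1, &sct, &sc)) {
        printf("Read completed with error (sct=%u, sc=%u)\n",
               (unsigned)sct, (unsigned)sc);
    }
    return 0;
}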
00:05:38.010 [2024-10-30 13:52:35.969566] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.969976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 
[2024-10-30 13:52:35.970325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.970997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971965] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.971994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.010 [2024-10-30 13:52:35.972484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 
[2024-10-30 13:52:35.973338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.973988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.974982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975069] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 
[2024-10-30 13:52:35.975821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.975998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.011 [2024-10-30 13:52:35.976327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.976978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977926] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.977984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 
[2024-10-30 13:52:35.978636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.978975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.979995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.980023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.012 [2024-10-30 13:52:35.980055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980273] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.980990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 [2024-10-30 13:52:35.981020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.013 
[2024-10-30 13:52:35.981049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:38.013 [... identical "Read NLB 1 * block size 512 > SGL length 1" *ERROR* lines from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd repeat continuously for timestamps 13:52:35.981077 through 13:52:36.000389 (console time 00:05:38.013-00:05:38.018); duplicate lines omitted ...]
00:05:38.015 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:05:38.018 [... remaining duplicate *ERROR* lines omitted ...] [2024-10-30 13:52:36.000425] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.000985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 
[2024-10-30 13:52:36.001182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.018 [2024-10-30 13:52:36.001456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.001756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.002996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003089] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 
[2024-10-30 13:52:36.003941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.003975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.004016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.004046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.004077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.004102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.004133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.004905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.004938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.004968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.004998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.019 [2024-10-30 13:52:36.005779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.005805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.005837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.005870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.005908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.005944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.005972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006201] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.006991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 
[2024-10-30 13:52:36.007172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.007992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008743] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.008889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.009521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.009550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.009579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.009610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.009645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.009678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.009713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.009752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.020 [2024-10-30 13:52:36.009780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.009805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.009837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.009865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.009892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.009923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.009956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.009987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 
[2024-10-30 13:52:36.010131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.010973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011860] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.011986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 [2024-10-30 13:52:36.012595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.021 
[2024-10-30 13:52:36.012630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.025 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:38.027 [2024-10-30 13:52:36.032246] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.032988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 
[2024-10-30 13:52:36.033044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.033975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034913] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.034992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.035019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.035057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.027 [2024-10-30 13:52:36.035084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 
[2024-10-30 13:52:36.035679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.035984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.036991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037835] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.037979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 
[2024-10-30 13:52:36.038616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.028 [2024-10-30 13:52:36.038863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.038901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.039963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040368] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.040979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 
[2024-10-30 13:52:36.041495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.041971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.029 [2024-10-30 13:52:36.042570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.042986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043020] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.043910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.227853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.228025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.228149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.228266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.228385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 [2024-10-30 13:52:36.228504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:38.030 
[2024-10-30 13:52:36.228623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* entry repeated several hundred times between 13:52:36.228623 and 13:52:36.273603 (Jenkins timestamps 00:05:38.030 through 00:05:38.034), with a single interleaved RPC response at 00:05:38.032: true ...]
00:05:38.034 13:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
00:05:38.034 13:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:39.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.422 13:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:39.422 13:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:39.422 13:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:39.422 true
00:05:39.422 13:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
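The entries above are the namespace hotplug loop of test/nvmf/target/ns_hotplug_stress.sh, traced via shell xtrace (the "-- target/ns_hotplug_stress.sh@NN -- #" prefix gives the script line number). A minimal sketch of what lines 44-50 appear to be doing, reconstructed only from the commands visible in this trace; the PERF_PID variable name and the exact while-loop shape are assumptions, only the RPC invocations themselves appear verbatim in the log:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1028                                                       # first value visible in this part of the log
while kill -0 "$PERF_PID"; do                                        # sh@44: keep looping while the I/O generator (PID 810082 here) is alive
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove namespace 1 while I/O is in flight
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the Delay0 bdev as a namespace
    null_size=$((null_size + 1))                                     # sh@49: 1028, 1029, 1030, ... in the trace
    $rpc_py bdev_null_resize NULL1 "$null_size"                      # sh@50: grow the NULL1 bdev; prints "true" on success
done

Once the I/O generator exits, kill -0 fails ("No such process" further down in the log) and the script moves on to wait for it and remove the namespaces.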
00:05:39.422 13:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:39.684 13:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:39.945 13:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:39.945 13:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:39.945 true
00:05:39.945 13:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
00:05:39.945 13:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:41.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.331 13:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:41.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.331 13:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:05:41.331 13:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:05:41.591 true
00:05:41.591 13:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
00:05:41.591 13:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:42.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:42.533 13:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:42.533 13:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:05:42.533 13:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:05:42.795 true
00:05:42.795 13:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
00:05:42.795 13:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.795 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.057 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:43.057 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:43.317 true 00:05:43.317 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:43.317 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.578 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.578 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:43.578 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:43.839 true 00:05:43.839 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:43.839 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.101 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.101 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:44.101 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:44.362 true 00:05:44.362 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082 00:05:44.362 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.745 Initializing NVMe Controllers 00:05:45.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:45.745 Controller IO queue size 128, less than required. 00:05:45.745 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:45.745 Controller IO queue size 128, less than required. 00:05:45.745 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:05:45.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:45.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:45.746 Initialization complete. Launching workers.
00:05:45.746 ========================================================
00:05:45.746 Latency(us)
00:05:45.746 Device Information : IOPS MiB/s Average min max
00:05:45.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2452.70 1.20 29166.33 1236.92 1093136.38
00:05:45.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16494.73 8.05 7760.08 1143.07 344648.65
00:05:45.746 ========================================================
00:05:45.746 Total : 18947.43 9.25 10531.07 1143.07 1093136.38
00:05:45.746
00:05:45.746 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:45.746 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:05:45.746 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:05:45.746 true
00:05:45.746 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 810082
00:05:45.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (810082) - No such process
00:05:45.746 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 810082
00:05:45.746 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:46.007 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:46.269 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:46.269 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:46.269 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:46.269 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.269 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:46.269 null0
00:05:46.531 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.531 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.531 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:46.531 null1
00:05:46.531 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
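Note on the latency summary above: the Total row is consistent with the two per-namespace rows. IOPS and MiB/s add up (2452.70 + 16494.73 = 18947.43 IOPS, 1.20 + 8.05 = 9.25 MiB/s), min/max are the element-wise extremes, and the overall average latency works out to the IOPS-weighted mean of the per-namespace averages:

    (2452.70 * 29166.33 + 16494.73 * 7760.08) / 18947.43 ≈ 10531 us

which matches the reported 10531.07 us.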
00:05:46.531 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.531 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:46.793 null2 00:05:46.793 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.793 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.793 13:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:47.054 null3 00:05:47.054 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.054 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.054 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:47.054 null4 00:05:47.054 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.054 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.054 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:47.314 null5 00:05:47.315 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.315 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.315 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:47.576 null6 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:47.576 null7 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
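The interleaved @14-@18 xtrace lines from here on come from eight add_remove workers running concurrently. As a rough sketch only (reconstructed from the traced commands, not copied from ns_hotplug_stress.sh; $rpc stands in for the scripts/rpc.py invocation shown in the trace), each worker does roughly:

    # Sketch of one hotplug worker: attach $bdev as namespace $nsid of cnode1
    # ten times, removing it again after each attach. $rpc is a stand-in for
    # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }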
00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.576 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
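Similarly, the @58-@66 trace lines around this point correspond to a driver loop along these lines (again a sketch based only on the traced commands, with $rpc as above): create eight null bdevs, start one background add_remove worker per bdev, record the worker PIDs, and wait for all of them (the "wait 816631 816634 ..." line below).

    # Sketch of the 8-worker phase: null0..null7 back eight namespaces that
    # are hot-added/removed concurrently.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096   # name, size 100, block size 4096, as traced
        add_remove "$((i + 1))" "null$i" &        # namespace IDs 1..8
        pids+=($!)
    done
    wait "${pids[@]}"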
00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 816631 816634 816637 816639 816642 816645 816649 816651 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.577 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.839 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.839 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.839 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.839 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.839 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.839 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.839 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.839 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.101 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.362 13:52:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.362 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.624 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.886 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.886 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.886 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.886 13:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.886 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.887 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.887 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.887 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.887 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.887 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.887 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.887 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.149 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.410 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.411 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.411 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.411 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.411 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.411 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.411 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.411 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.672 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.933 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.933 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.933 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.933 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.194 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.455 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.456 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.716 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.717 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.717 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.717 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.717 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.717 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.717 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.717 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.717 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.717 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.717 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.717 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.717 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.717 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.717 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.978 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.241 13:52:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.241 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.505 13:52:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:51.505 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:51.506 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:51.506 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:51.506 rmmod nvme_tcp 00:05:51.506 rmmod nvme_fabrics 00:05:51.506 rmmod nvme_keyring 00:05:51.766 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:51.766 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:51.766 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 809484 ']' 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 809484 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 809484 ']' 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 809484 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 809484 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 809484' 00:05:51.767 killing process with pid 809484 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 809484 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 809484 00:05:51.767 13:52:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:51.767 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:51.767 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:51.767 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:51.767 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.767 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.767 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:54.409 00:05:54.409 real 0m49.407s 00:05:54.409 user 3m14.059s 00:05:54.409 sys 0m16.673s 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:54.409 ************************************ 00:05:54.409 END TEST nvmf_ns_hotplug_stress 00:05:54.409 ************************************ 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:54.409 ************************************ 00:05:54.409 START TEST nvmf_delete_subsystem 00:05:54.409 ************************************ 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:54.409 * Looking for test storage... 
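Note on the hot-plug phase traced above: it is essentially two interleaved loops hammering the same subsystem with namespace attach/detach RPCs until each has run ten iterations. A minimal sketch of that pattern, reconstructed from the xtrace lines (ns_hotplug_stress.sh@16-18); the concurrent-loop structure, the randomization, and the error handling here are illustrative assumptions, not the verbatim script, and it presumes the null0..null7 bdevs and nqn.2016-06.io.spdk:cnode1 were already created earlier in the test:

  # Sketch only: approximates the add/remove churn seen in the trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_loop() {
      local i n
      for ((i = 0; i < 10; ++i)); do
          n=$(( (RANDOM % 8) + 1 ))                              # nsid 1..8, backed by null0..null7
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true   # ignore nsid-in-use errors
      done
  }

  remove_loop() {
      local i
      for ((i = 0; i < 10; ++i)); do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" $(( (RANDOM % 8) + 1 )) || true  # ignore nsid-not-found errors
      done
  }

  add_loop &       # the interleaved @17/@18 records in the trace suggest concurrent loops
  remove_loop &
  wait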
00:05:54.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.409 --rc genhtml_branch_coverage=1 00:05:54.409 --rc genhtml_function_coverage=1 00:05:54.409 --rc genhtml_legend=1 00:05:54.409 --rc geninfo_all_blocks=1 00:05:54.409 --rc geninfo_unexecuted_blocks=1 00:05:54.409 00:05:54.409 ' 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.409 --rc genhtml_branch_coverage=1 00:05:54.409 --rc genhtml_function_coverage=1 00:05:54.409 --rc genhtml_legend=1 00:05:54.409 --rc geninfo_all_blocks=1 00:05:54.409 --rc geninfo_unexecuted_blocks=1 00:05:54.409 00:05:54.409 ' 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.409 --rc genhtml_branch_coverage=1 00:05:54.409 --rc genhtml_function_coverage=1 00:05:54.409 --rc genhtml_legend=1 00:05:54.409 --rc geninfo_all_blocks=1 00:05:54.409 --rc geninfo_unexecuted_blocks=1 00:05:54.409 00:05:54.409 ' 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.409 --rc genhtml_branch_coverage=1 00:05:54.409 --rc genhtml_function_coverage=1 00:05:54.409 --rc genhtml_legend=1 00:05:54.409 --rc geninfo_all_blocks=1 00:05:54.409 --rc geninfo_unexecuted_blocks=1 00:05:54.409 00:05:54.409 ' 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.409 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:54.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:54.410 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:02.725 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:02.726 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:02.726 
13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:02.726 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:02.726 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:02.726 Found net devices under 0000:4b:00.1: cvl_0_1 
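The discovery block above is how nvmf/common.sh picks its test NICs: it keeps per-vendor PCI device-ID lists (e810, x722, mlx), walks the matching PCI functions, and maps each one to its kernel net device through sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 resolve to cvl_0_0 and cvl_0_1 here. A rough standalone equivalent of that sysfs lookup (the helper name is invented for illustration; the sysfs layout is standard Linux):

  # Sketch: resolve a PCI function to its net device name(s), the same trick
  # common.sh uses via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
  pci_to_netdevs() {
      local pci=$1 entry
      for entry in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $entry ]] || continue    # no net/ entry: function not bound to a netdev driver
          echo "${entry##*/}"            # keep just the interface name, e.g. cvl_0_0
      done
  }

  pci_to_netdevs 0000:4b:00.0            # prints cvl_0_0 on this test bed, per the log above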
00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:02.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:02.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:06:02.726 00:06:02.726 --- 10.0.0.2 ping statistics --- 00:06:02.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.726 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:02.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:02.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:06:02.726 00:06:02.726 --- 10.0.0.1 ping statistics --- 00:06:02.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.726 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=821968 00:06:02.726 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 821968 00:06:02.727 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:02.727 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 821968 ']' 00:06:02.727 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.727 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.727 13:52:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.727 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.727 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 [2024-10-30 13:52:59.963296] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:06:02.727 [2024-10-30 13:52:59.963362] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:02.727 [2024-10-30 13:53:00.066090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.727 [2024-10-30 13:53:00.121078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:02.727 [2024-10-30 13:53:00.121146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:02.727 [2024-10-30 13:53:00.121158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:02.727 [2024-10-30 13:53:00.121168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:02.727 [2024-10-30 13:53:00.121177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:02.727 [2024-10-30 13:53:00.123006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.727 [2024-10-30 13:53:00.123158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 [2024-10-30 13:53:00.857003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:02.727 13:53:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 [2024-10-30 13:53:00.881332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 NULL1 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 Delay0 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=822127 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:02.727 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:02.727 [2024-10-30 13:53:01.008282] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
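For context, the target-side setup traced just above reduces to a short RPC sequence before the subsystem is deleted out from under active I/O: create the TCP transport, create cnode1, add a 10.0.0.2:4420 listener, back it with a null bdev wrapped in a delay bdev so requests stay in flight, then start spdk_nvme_perf as the load generator. A condensed sketch using only commands visible in the trace (rpc_cmd in the harness ultimately drives scripts/rpc.py; PID bookkeeping, error checks, and cleanup are omitted, and a running nvmf_tgt is assumed, as in the log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # -m 10: allow up to 10 namespaces
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512                               # 1000 MiB null bdev, 512 B blocks
  "$rpc" bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                      # ~1 s added latency keeps I/O outstanding
  "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

  # Load generator from the trace; the subsystem is deleted while this is still running.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  "$rpc" nvmf_delete_subsystem "$nqn"
  wait    # let perf wind down; the aborted I/O shows up as the error completions logged below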
00:06:04.641 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:04.641 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.641 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.212 Write completed with error (sct=0, sc=8) 00:06:05.212 Read completed with error (sct=0, sc=8) 00:06:05.212 Read completed with error (sct=0, sc=8) 00:06:05.212 starting I/O failed: -6 00:06:05.212 Read completed with error (sct=0, sc=8) 00:06:05.212 Read completed with error (sct=0, sc=8) 00:06:05.212 Read completed with error (sct=0, sc=8) 00:06:05.212 Write completed with error (sct=0, sc=8) 00:06:05.212 starting I/O failed: -6 00:06:05.212 Read completed with error (sct=0, sc=8) 00:06:05.212 Write completed with error (sct=0, sc=8) 00:06:05.212 Read completed with error (sct=0, sc=8) 00:06:05.212 Write completed with error (sct=0, sc=8) 00:06:05.212 starting I/O failed: -6 00:06:05.212 Read completed with error (sct=0, sc=8) 00:06:05.212 Write completed with error (sct=0, sc=8) 00:06:05.212 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 [2024-10-30 13:53:03.215538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138a410 is same with the state(6) to be set 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 
Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed 
with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 starting I/O failed: -6 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 [2024-10-30 13:53:03.219754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc2dc000c00 is same with the state(6) to be set 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read 
completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Write completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:05.213 Read completed with error (sct=0, sc=8) 00:06:06.154 [2024-10-30 13:53:04.189119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138baf0 is same with the state(6) to be set 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 [2024-10-30 13:53:04.218935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138a5f0 is same with the state(6) to be set 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with 
error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 [2024-10-30 13:53:04.219620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ab00 is same with the state(6) to be set 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 [2024-10-30 13:53:04.221761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc2dc00cfe0 is same with the state(6) to be set 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 Write completed with error (sct=0, sc=8) 00:06:06.154 Read completed with error (sct=0, sc=8) 00:06:06.154 [2024-10-30 13:53:04.222085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc2dc00d640 is same with the state(6) to be set 00:06:06.154 Initializing NVMe Controllers 00:06:06.154 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:06.154 Controller IO queue size 128, less than required. 00:06:06.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:06.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:06.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:06.154 Initialization complete. Launching workers. 00:06:06.154 ======================================================== 00:06:06.154 Latency(us) 00:06:06.154 Device Information : IOPS MiB/s Average min max 00:06:06.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.65 0.08 890790.07 410.43 1008113.61 00:06:06.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.20 0.08 913140.04 343.54 1011729.04 00:06:06.154 ======================================================== 00:06:06.154 Total : 333.85 0.16 901648.62 343.54 1011729.04 00:06:06.154 00:06:06.154 [2024-10-30 13:53:04.222523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138baf0 (9): Bad file descriptor 00:06:06.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:06.155 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.155 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:06.155 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 822127 00:06:06.155 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 822127 00:06:06.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (822127) - No such process 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 822127 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 822127 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 822127 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ 
-n '' ]] 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.726 [2024-10-30 13:53:04.752254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=822837 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 822837 00:06:06.726 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.726 [2024-10-30 13:53:04.850714] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
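The long burst of "completed with error (sct=0, sc=8)" lines above is expected: the subsystem is deleted while spdk_nvme_perf still has I/O queued behind the Delay0 bdev, so the outstanding commands come back aborted and perf reports errors. A minimal sketch of the delete-then-poll pattern the test uses (continuing the $RPC and $perf_pid names assumed in the earlier sketch; the loop bound mirrors the delay counter visible in the trace):

# Delete the subsystem under load, then wait for perf to notice and exit
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do        # kill -0 only checks that the PID is still alive
    if (( delay++ > 30 )); then
        echo "spdk_nvme_perf did not exit in time" >&2
        break
    fi
    sleep 0.5
done
wait "$perf_pid" || true                         # non-zero exit is expected: its I/O was aborted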
00:06:06.986 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.986 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 822837 00:06:06.986 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.556 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.556 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 822837 00:06:07.556 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:08.125 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:08.125 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 822837 00:06:08.125 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:08.695 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:08.695 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 822837 00:06:08.695 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.267 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:09.267 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 822837 00:06:09.267 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.527 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:09.527 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 822837 00:06:09.527 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.788 Initializing NVMe Controllers 00:06:09.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:09.788 Controller IO queue size 128, less than required. 00:06:09.788 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:09.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:09.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:09.788 Initialization complete. Launching workers. 
00:06:09.788 ======================================================== 00:06:09.788 Latency(us) 00:06:09.788 Device Information : IOPS MiB/s Average min max 00:06:09.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001965.36 1000133.38 1041236.95 00:06:09.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003008.95 1000341.33 1007889.78 00:06:09.788 ======================================================== 00:06:09.788 Total : 256.00 0.12 1002487.15 1000133.38 1041236.95 00:06:09.788 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 822837 00:06:10.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (822837) - No such process 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 822837 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:10.050 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:10.050 rmmod nvme_tcp 00:06:10.050 rmmod nvme_fabrics 00:06:10.050 rmmod nvme_keyring 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 821968 ']' 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 821968 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 821968 ']' 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 821968 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821968 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821968' 00:06:10.312 killing process with pid 821968 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 821968 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 821968 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.312 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:12.857 00:06:12.857 real 0m18.464s 00:06:12.857 user 0m31.119s 00:06:12.857 sys 0m6.808s 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.857 ************************************ 00:06:12.857 END TEST nvmf_delete_subsystem 00:06:12.857 ************************************ 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:12.857 ************************************ 00:06:12.857 START TEST nvmf_host_management 00:06:12.857 ************************************ 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:12.857 * Looking for test storage... 
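The nvmf_delete_subsystem run above ends by firing the trap registered earlier: the target app is killed, the initiator kernel modules are unloaded (the rmmod lines), and the spdk network namespace is removed before the next test starts below. A simplified sketch of that teardown; cleanup and $nvmf_tgt_pid are placeholder names here, not the helpers (nvmftestfini, killprocess, remove_spdk_ns) the suite actually defines:

# Rough equivalent of the teardown traced above (assumed variable names)
cleanup() {
    kill "$nvmf_tgt_pid" 2>/dev/null || true
    wait "$nvmf_tgt_pid" 2>/dev/null || true
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring 2>/dev/null || true  # as in the rmmod lines
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true                 # matches remove_spdk_ns
}
trap cleanup SIGINT SIGTERM EXIT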
00:06:12.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.857 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.858 --rc genhtml_branch_coverage=1 00:06:12.858 --rc genhtml_function_coverage=1 00:06:12.858 --rc genhtml_legend=1 00:06:12.858 --rc geninfo_all_blocks=1 00:06:12.858 --rc geninfo_unexecuted_blocks=1 00:06:12.858 00:06:12.858 ' 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.858 --rc genhtml_branch_coverage=1 00:06:12.858 --rc genhtml_function_coverage=1 00:06:12.858 --rc genhtml_legend=1 00:06:12.858 --rc geninfo_all_blocks=1 00:06:12.858 --rc geninfo_unexecuted_blocks=1 00:06:12.858 00:06:12.858 ' 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.858 --rc genhtml_branch_coverage=1 00:06:12.858 --rc genhtml_function_coverage=1 00:06:12.858 --rc genhtml_legend=1 00:06:12.858 --rc geninfo_all_blocks=1 00:06:12.858 --rc geninfo_unexecuted_blocks=1 00:06:12.858 00:06:12.858 ' 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.858 --rc genhtml_branch_coverage=1 00:06:12.858 --rc genhtml_function_coverage=1 00:06:12.858 --rc genhtml_legend=1 00:06:12.858 --rc geninfo_all_blocks=1 00:06:12.858 --rc geninfo_unexecuted_blocks=1 00:06:12.858 00:06:12.858 ' 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.858 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:12.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:12.859 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:21.063 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:21.063 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:21.063 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.063 13:53:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:21.063 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.063 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:21.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:21.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:06:21.064 00:06:21.064 --- 10.0.0.2 ping statistics --- 00:06:21.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.064 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:21.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:06:21.064 00:06:21.064 --- 10.0.0.1 ping statistics --- 00:06:21.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.064 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=827836 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 827836 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:21.064 13:53:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 827836 ']' 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.064 13:53:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.064 [2024-10-30 13:53:18.511456] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:06:21.064 [2024-10-30 13:53:18.511522] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.064 [2024-10-30 13:53:18.609917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.064 [2024-10-30 13:53:18.663393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.064 [2024-10-30 13:53:18.663445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:21.064 [2024-10-30 13:53:18.663454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.064 [2024-10-30 13:53:18.663461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.064 [2024-10-30 13:53:18.663468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
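
The nvmf_tcp_init steps traced above wire the two ice-bound e810 ports into a loopback test topology: cvl_0_0 (the target side) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, cvl_0_1 (the initiator side) keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP port 4420, and a single ping in each direction confirms reachability before nvmf_tgt is started inside the namespace. A minimal standalone sketch of the same wiring, reusing the interface names, addresses, and rule tag from this log (adapt to your own NICs; this is a sketch, not the common.sh helper itself):

# target-side port gets its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator side stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port; the SPDK_NVMF comment lets teardown strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

# sanity-check both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
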
00:06:21.064 [2024-10-30 13:53:18.665505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.064 [2024-10-30 13:53:18.665666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.064 [2024-10-30 13:53:18.665815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.064 [2024-10-30 13:53:18.665815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:21.064 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.064 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:21.064 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:21.064 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.064 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.326 [2024-10-30 13:53:19.389969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.326 Malloc0 00:06:21.326 [2024-10-30 13:53:19.472326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.326 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=828207 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 828207 /var/tmp/bdevperf.sock 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 828207 ']' 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:21.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:21.327 { 00:06:21.327 "params": { 00:06:21.327 "name": "Nvme$subsystem", 00:06:21.327 "trtype": "$TEST_TRANSPORT", 00:06:21.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:21.327 "adrfam": "ipv4", 00:06:21.327 "trsvcid": "$NVMF_PORT", 00:06:21.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:21.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:21.327 "hdgst": ${hdgst:-false}, 00:06:21.327 "ddgst": ${ddgst:-false} 00:06:21.327 }, 00:06:21.327 "method": "bdev_nvme_attach_controller" 00:06:21.327 } 00:06:21.327 EOF 00:06:21.327 )") 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:21.327 13:53:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:21.327 "params": { 00:06:21.327 "name": "Nvme0", 00:06:21.327 "trtype": "tcp", 00:06:21.327 "traddr": "10.0.0.2", 00:06:21.327 "adrfam": "ipv4", 00:06:21.327 "trsvcid": "4420", 00:06:21.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:21.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:21.327 "hdgst": false, 00:06:21.327 "ddgst": false 00:06:21.327 }, 00:06:21.327 "method": "bdev_nvme_attach_controller" 00:06:21.327 }' 00:06:21.327 [2024-10-30 13:53:19.581900] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
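
The JSON fragment printed above is what gen_nvmf_target_json hands to bdevperf; the --json /dev/fd/63 argument in the trace is just bash process substitution around that generated config. Below is a reduced sketch of the same invocation with the config written to a file instead: the bdev_nvme_attach_controller parameters are copied from the trace, while the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout and is assumed here, since the trace only echoes the inner attach-controller object.

cat > /tmp/nvme0_config.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF

# 64 outstanding I/Os of 64 KiB each, verify workload, 10 s run; the private
# RPC socket is what waitforio later polls with bdev_get_iostat
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0_config.json \
    -q 64 -o 65536 -w verify -t 10
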
00:06:21.327 [2024-10-30 13:53:19.581969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828207 ] 00:06:21.587 [2024-10-30 13:53:19.676323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.587 [2024-10-30 13:53:19.729154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.848 Running I/O for 10 seconds... 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:22.422 13:53:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.422 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.422 [2024-10-30 13:53:20.485558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:22.422 [2024-10-30 13:53:20.485612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.485623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:22.422 [2024-10-30 13:53:20.485632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.485640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:22.422 [2024-10-30 13:53:20.485648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.485656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:22.422 [2024-10-30 13:53:20.485664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.485672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a200 is same with the state(6) to be set 00:06:22.422 [2024-10-30 13:53:20.486302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.422 [2024-10-30 13:53:20.486579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.422 [2024-10-30 13:53:20.486589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.486982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.486992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.423 [2024-10-30 13:53:20.487191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.423 [2024-10-30 13:53:20.487201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.487441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.424 [2024-10-30 13:53:20.487449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.488708] 
nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:22.424 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.424 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:22.424 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.424 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.424 task offset: 90112 on job bdev=Nvme0n1 fails 00:06:22.424 00:06:22.424 Latency(us) 00:06:22.424 [2024-10-30T12:53:20.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:22.424 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:22.424 Job: Nvme0n1 ended in about 0.47 seconds with error 00:06:22.424 Verification LBA range: start 0x0 length 0x400 00:06:22.424 Nvme0n1 : 0.47 1501.94 93.87 136.54 0.00 37892.14 1672.53 34734.08 00:06:22.424 [2024-10-30T12:53:20.723Z] =================================================================================================================== 00:06:22.424 [2024-10-30T12:53:20.723Z] Total : 1501.94 93.87 136.54 0.00 37892.14 1672.53 34734.08 00:06:22.424 [2024-10-30 13:53:20.490875] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.424 [2024-10-30 13:53:20.490907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a200 (9): Bad file descriptor 00:06:22.424 [2024-10-30 13:53:20.495562] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:22.424 [2024-10-30 13:53:20.495647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:22.424 [2024-10-30 13:53:20.495673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:22.424 [2024-10-30 13:53:20.495687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:22.424 [2024-10-30 13:53:20.495697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:22.424 [2024-10-30 13:53:20.495705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:22.424 [2024-10-30 13:53:20.495712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe8a200 00:06:22.424 [2024-10-30 13:53:20.495732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a200 (9): Bad file descriptor 00:06:22.424 [2024-10-30 13:53:20.495750] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:22.424 [2024-10-30 13:53:20.495759] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:22.424 [2024-10-30 13:53:20.495768] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
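
The failure cascade above, and the "Resetting controller failed" entry that follows, is the point of the test rather than a defect: while bdevperf drives verify I/O, host_management.sh@84 removes nqn.2016-06.io.spdk:host0 from the allowed-host list of cnode0, the target tears down the queue pair (the ABORTED - SQ DELETION completions), the initiator's reconnect is rejected with "does not allow host", and host_management.sh@85 re-adds the host so a fresh bdevperf run can attach again. The same fault injection can be driven by hand with SPDK's rpc.py against the target's default /var/tmp/spdk.sock socket; the rpc.py path below is the one for this workspace, and the sketch stands in for the test's rpc_cmd wrapper:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# revoke the host's access while I/O is in flight; outstanding commands are
# aborted and reconnect attempts fail with "does not allow host"
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# restore access; a new connection using hostnqn host0 is accepted again
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
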
00:06:22.424 [2024-10-30 13:53:20.495784] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:06:22.424 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.424 13:53:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 828207 00:06:23.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (828207) - No such process 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:23.367 { 00:06:23.367 "params": { 00:06:23.367 "name": "Nvme$subsystem", 00:06:23.367 "trtype": "$TEST_TRANSPORT", 00:06:23.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:23.367 "adrfam": "ipv4", 00:06:23.367 "trsvcid": "$NVMF_PORT", 00:06:23.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:23.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:23.367 "hdgst": ${hdgst:-false}, 00:06:23.367 "ddgst": ${ddgst:-false} 00:06:23.367 }, 00:06:23.367 "method": "bdev_nvme_attach_controller" 00:06:23.367 } 00:06:23.367 EOF 00:06:23.367 )") 00:06:23.367 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:23.368 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:23.368 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:23.368 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:23.368 "params": { 00:06:23.368 "name": "Nvme0", 00:06:23.368 "trtype": "tcp", 00:06:23.368 "traddr": "10.0.0.2", 00:06:23.368 "adrfam": "ipv4", 00:06:23.368 "trsvcid": "4420", 00:06:23.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:23.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:23.368 "hdgst": false, 00:06:23.368 "ddgst": false 00:06:23.368 }, 00:06:23.368 "method": "bdev_nvme_attach_controller" 00:06:23.368 }' 00:06:23.368 [2024-10-30 13:53:21.563572] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:06:23.368 [2024-10-30 13:53:21.563627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828566 ] 00:06:23.368 [2024-10-30 13:53:21.652619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.629 [2024-10-30 13:53:21.688181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.889 Running I/O for 1 seconds... 00:06:24.831 1856.00 IOPS, 116.00 MiB/s 00:06:24.831 Latency(us) 00:06:24.831 [2024-10-30T12:53:23.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:24.831 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:24.831 Verification LBA range: start 0x0 length 0x400 00:06:24.831 Nvme0n1 : 1.03 1864.23 116.51 0.00 0.00 33689.56 5597.87 30583.47 00:06:24.831 [2024-10-30T12:53:23.130Z] =================================================================================================================== 00:06:24.831 [2024-10-30T12:53:23.130Z] Total : 1864.23 116.51 0.00 0.00 33689.56 5597.87 30583.47 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:25.092 rmmod nvme_tcp 00:06:25.092 rmmod nvme_fabrics 00:06:25.092 rmmod nvme_keyring 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 827836 ']' 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 827836 00:06:25.092 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 827836 ']' 00:06:25.093 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 827836 00:06:25.093 13:53:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:25.093 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.093 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 827836 00:06:25.093 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:25.093 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:25.093 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 827836' 00:06:25.093 killing process with pid 827836 00:06:25.093 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 827836 00:06:25.093 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 827836 00:06:25.093 [2024-10-30 13:53:23.383959] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.354 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.271 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:27.272 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:27.272 00:06:27.272 real 0m14.785s 00:06:27.272 user 0m23.844s 00:06:27.272 sys 0m6.714s 00:06:27.272 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.272 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.272 ************************************ 00:06:27.272 END TEST nvmf_host_management 00:06:27.272 ************************************ 00:06:27.272 13:53:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
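
The nvmftestfini trace above undoes the whole setup: the host-side nvme-tcp and nvme-fabrics modules are unloaded, the target process (pid 827836) is killed and reaped, the iptables rules tagged with the SPDK_NVMF comment are filtered out, the test namespace is removed (the remove_spdk_ns helper body is not echoed because its xtrace is disabled), and the leftover initiator address is flushed. A sketch of the equivalent cleanup; deleting the namespace directly is an assumption about what remove_spdk_ns does:

# unload the host-side NVMe/TCP stack pulled in for the test
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# stop the target that was started inside the namespace (827836 in this run)
kill "$nvmfpid"

# drop only the rules tagged SPDK_NVMF at setup time, keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore

# assumed equivalent of remove_spdk_ns, then clear the initiator-side address
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
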
00:06:27.272 13:53:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:27.272 13:53:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.272 13:53:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:27.534 ************************************ 00:06:27.534 START TEST nvmf_lvol 00:06:27.534 ************************************ 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:27.534 * Looking for test storage... 00:06:27.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.534 --rc genhtml_branch_coverage=1 00:06:27.534 --rc genhtml_function_coverage=1 00:06:27.534 --rc genhtml_legend=1 00:06:27.534 --rc geninfo_all_blocks=1 00:06:27.534 --rc geninfo_unexecuted_blocks=1 00:06:27.534 00:06:27.534 ' 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.534 --rc genhtml_branch_coverage=1 00:06:27.534 --rc genhtml_function_coverage=1 00:06:27.534 --rc genhtml_legend=1 00:06:27.534 --rc geninfo_all_blocks=1 00:06:27.534 --rc geninfo_unexecuted_blocks=1 00:06:27.534 00:06:27.534 ' 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.534 --rc genhtml_branch_coverage=1 00:06:27.534 --rc genhtml_function_coverage=1 00:06:27.534 --rc genhtml_legend=1 00:06:27.534 --rc geninfo_all_blocks=1 00:06:27.534 --rc geninfo_unexecuted_blocks=1 00:06:27.534 00:06:27.534 ' 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.534 --rc genhtml_branch_coverage=1 00:06:27.534 --rc genhtml_function_coverage=1 00:06:27.534 --rc genhtml_legend=1 00:06:27.534 --rc geninfo_all_blocks=1 00:06:27.534 --rc geninfo_unexecuted_blocks=1 00:06:27.534 00:06:27.534 ' 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
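The lt 1.15 2 / cmp_versions trace above is scripts/common.sh checking whether the installed lcov predates 2.x before exporting the branch/function-coverage flags that follow. A minimal sketch of that element-wise dotted-version compare, simplified to '.'/'-' separators and purely numeric fields (the real helper also splits on ':'):
lt() {                                   # succeeds when $1 is strictly older than $2
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                             # equal versions are not "less than"
}
lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'   # 1.15 came from: lcov --version | awk '{print $NF}'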
00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.534 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:27.535 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:35.682 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:35.682 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.682 13:53:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.682 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:35.683 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:35.683 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.683 13:53:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:35.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:06:35.683 00:06:35.683 --- 10.0.0.2 ping statistics --- 00:06:35.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.683 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:35.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:06:35.683 00:06:35.683 --- 10.0.0.1 ping statistics --- 00:06:35.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.683 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=833245 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 833245 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 833245 ']' 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.683 13:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.683 [2024-10-30 13:53:33.367941] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
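The block above is the phy-mode TCP network bring-up: the first e810 port (cvl_0_0) is moved into a fresh network namespace to act as the target side, the second port (cvl_0_1) stays in the default namespace as the initiator, a single iptables rule opens TCP/4420, and both directions are ping-checked before nvmf_tgt is started inside the namespace with core mask 0x7. A condensed sketch of that sequence, using the interface names, addresses and core mask from this run (the iptables comment is abbreviated and nvmf_tgt is shown with a repo-relative path):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &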
00:06:35.683 [2024-10-30 13:53:33.368007] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.683 [2024-10-30 13:53:33.470374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.683 [2024-10-30 13:53:33.521690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.683 [2024-10-30 13:53:33.521755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.683 [2024-10-30 13:53:33.521767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.683 [2024-10-30 13:53:33.521778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.683 [2024-10-30 13:53:33.521786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:35.683 [2024-10-30 13:53:33.523899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.683 [2024-10-30 13:53:33.524126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.683 [2024-10-30 13:53:33.524129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.945 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.945 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:35.945 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:35.945 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.945 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.945 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.945 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:36.207 [2024-10-30 13:53:34.400016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.207 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:36.469 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:36.469 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:36.730 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:36.730 13:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:36.991 13:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:37.252 13:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8bc3ace4-836f-4e95-a580-633cb6e665d4 00:06:37.253 13:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8bc3ace4-836f-4e95-a580-633cb6e665d4 lvol 20 00:06:37.253 13:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=684c4b20-891f-42bf-af89-95e3eadc34f1 00:06:37.253 13:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:37.514 13:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 684c4b20-891f-42bf-af89-95e3eadc34f1 00:06:37.776 13:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:37.776 [2024-10-30 13:53:36.060570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.039 13:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:38.039 13:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=833763 00:06:38.039 13:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:38.039 13:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:38.982 13:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 684c4b20-891f-42bf-af89-95e3eadc34f1 MY_SNAPSHOT 00:06:39.244 13:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=54865fad-793a-4f72-b435-6d8236b38c9f 00:06:39.244 13:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 684c4b20-891f-42bf-af89-95e3eadc34f1 30 00:06:39.506 13:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 54865fad-793a-4f72-b435-6d8236b38c9f MY_CLONE 00:06:39.773 13:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c41c947f-50b1-4388-a764-d7c7943a28ef 00:06:39.773 13:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c41c947f-50b1-4388-a764-d7c7943a28ef 00:06:40.036 13:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 833763 00:06:50.033 Initializing NVMe Controllers 00:06:50.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:50.033 Controller IO queue size 128, less than required. 00:06:50.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:50.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:50.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:50.033 Initialization complete. Launching workers. 00:06:50.033 ======================================================== 00:06:50.033 Latency(us) 00:06:50.033 Device Information : IOPS MiB/s Average min max 00:06:50.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17072.20 66.69 7500.24 1285.70 61168.32 00:06:50.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16086.54 62.84 7957.77 2695.62 62775.00 00:06:50.033 ======================================================== 00:06:50.033 Total : 33158.74 129.53 7722.20 1285.70 62775.00 00:06:50.033 00:06:50.033 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:50.033 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 684c4b20-891f-42bf-af89-95e3eadc34f1 00:06:50.033 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8bc3ace4-836f-4e95-a580-633cb6e665d4 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:50.033 rmmod nvme_tcp 00:06:50.033 rmmod nvme_fabrics 00:06:50.033 rmmod nvme_keyring 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 833245 ']' 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 833245 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 833245 ']' 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 833245 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833245 00:06:50.033 13:53:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833245' 00:06:50.033 killing process with pid 833245 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 833245 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 833245 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.033 13:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:51.420 00:06:51.420 real 0m23.923s 00:06:51.420 user 1m4.715s 00:06:51.420 sys 0m8.661s 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:51.420 ************************************ 00:06:51.420 END TEST nvmf_lvol 00:06:51.420 ************************************ 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:51.420 ************************************ 00:06:51.420 START TEST nvmf_lvs_grow 00:06:51.420 ************************************ 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:51.420 * Looking for test storage... 
00:06:51.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.420 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.681 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.681 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.681 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.681 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.681 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.681 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.681 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.681 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.681 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.682 --rc genhtml_branch_coverage=1 00:06:51.682 --rc genhtml_function_coverage=1 00:06:51.682 --rc genhtml_legend=1 00:06:51.682 --rc geninfo_all_blocks=1 00:06:51.682 --rc geninfo_unexecuted_blocks=1 00:06:51.682 00:06:51.682 ' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.682 --rc genhtml_branch_coverage=1 00:06:51.682 --rc genhtml_function_coverage=1 00:06:51.682 --rc genhtml_legend=1 00:06:51.682 --rc geninfo_all_blocks=1 00:06:51.682 --rc geninfo_unexecuted_blocks=1 00:06:51.682 00:06:51.682 ' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.682 --rc genhtml_branch_coverage=1 00:06:51.682 --rc genhtml_function_coverage=1 00:06:51.682 --rc genhtml_legend=1 00:06:51.682 --rc geninfo_all_blocks=1 00:06:51.682 --rc geninfo_unexecuted_blocks=1 00:06:51.682 00:06:51.682 ' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.682 --rc genhtml_branch_coverage=1 00:06:51.682 --rc genhtml_function_coverage=1 00:06:51.682 --rc genhtml_legend=1 00:06:51.682 --rc geninfo_all_blocks=1 00:06:51.682 --rc geninfo_unexecuted_blocks=1 00:06:51.682 00:06:51.682 ' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:51.682 13:53:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:51.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:51.682 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:59.822 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:59.822 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:59.822 13:53:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:59.822 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:59.822 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:59.822 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:59.822 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:59.822 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:59.822 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:59.822 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:59.822 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:59.822 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:59.822 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:59.822 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:59.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:59.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:06:59.822 00:06:59.822 --- 10.0.0.2 ping statistics --- 00:06:59.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.822 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:06:59.822 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:59.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
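Editor's note: the nvmf_tcp_init sequence above is the entire network fixture for this job. The two E810 ports found earlier (device 0x159b, exposed as cvl_0_0 and cvl_0_1, presumably cabled back-to-back) are split across namespaces: cvl_0_0 becomes the target side inside cvl_0_0_ns_spdk at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP port 4420 on the initiator side, and both directions are ping-tested. A minimal sketch of the same setup, using the interface and namespace names from this run (run as root; anything not shown in the trace is an assumption):
  # Two-namespace NVMe/TCP loopback, as built by nvmf_tcp_init (sketch).
  TARGET_IF=cvl_0_0        # becomes the target-side port inside the namespace
  INITIATOR_IF=cvl_0_1     # stays in the root namespace as the initiator
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                       # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator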
00:06:59.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:06:59.822 00:06:59.822 --- 10.0.0.1 ping statistics --- 00:06:59.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.823 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=840313 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 840313 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 840313 ']' 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.823 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:59.823 [2024-10-30 13:53:57.323510] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
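Editor's note: the lines above are nvmfappstart at work: nvmf_tgt is launched inside the target namespace with -i 0 -e 0xFFFF -m 0x1, and waitforlisten holds the script until pid 840313 answers on /var/tmp/spdk.sock before the TCP transport is created. A simplified equivalent, assuming the repository path used by this job; the polling loop is a stand-in for the harness's waitforlisten helper, and the transport flags are taken verbatim from the trace:
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start nvmf_tgt on core 0 inside the target namespace, as nvmfappstart -m 0x1 does here.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Wait for the JSON-RPC server to come up (simplified waitforlisten).
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done
  # Enable the TCP transport, as nvmf_lvs_grow.sh@100 does next in the trace.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192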
00:06:59.823 [2024-10-30 13:53:57.323579] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.823 [2024-10-30 13:53:57.422918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.823 [2024-10-30 13:53:57.474384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.823 [2024-10-30 13:53:57.474441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.823 [2024-10-30 13:53:57.474460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.823 [2024-10-30 13:53:57.474471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.823 [2024-10-30 13:53:57.474478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:59.823 [2024-10-30 13:53:57.475359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.085 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.085 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:00.085 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:00.085 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.085 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:00.086 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.086 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:00.086 [2024-10-30 13:53:58.337318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.086 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:00.086 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.086 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.086 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:00.348 ************************************ 00:07:00.348 START TEST lvs_grow_clean 00:07:00.348 ************************************ 00:07:00.348 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:00.348 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:00.348 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:00.348 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:00.348 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:00.348 13:53:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:00.348 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:00.348 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:00.348 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:00.348 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:00.608 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:00.608 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:00.608 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:00.608 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:00.608 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:00.869 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:00.869 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:00.869 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b38133a4-0803-4ede-bc01-a87d486b03b4 lvol 150 00:07:01.130 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f264402c-9cbb-41b5-ae15-2fd689c81a46 00:07:01.130 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.130 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:01.130 [2024-10-30 13:53:59.383287] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:01.130 [2024-10-30 13:53:59.383369] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:01.130 true 00:07:01.130 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:01.130 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:01.392 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:01.392 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:01.653 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f264402c-9cbb-41b5-ae15-2fd689c81a46 00:07:01.915 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:01.915 [2024-10-30 13:54:00.125666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.915 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=840889 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 840889 /var/tmp/bdevperf.sock 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 840889 ']' 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:02.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.176 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:02.176 [2024-10-30 13:54:00.388545] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
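Editor's note: at this point everything the lvs_grow_clean case needs is visible in the trace: a 200 MiB file-backed AIO bdev with 4 KiB blocks, an lvstore with 4 MiB clusters on top of it (50 clusters fit in the file, one is taken by lvstore metadata, hence the 49 total_data_clusters asserted above), a 150 MiB lvol exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, the backing file already truncated to 400 MiB and rescanned, and a bdevperf started with -z so it idles until driven over /var/tmp/bdevperf.sock. A condensed sketch of that RPC sequence and the cluster counts the test asserts as the run proceeds (paths from this job; the grow and the re-checks appear further down in the trace):
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  AIO="$SPDK/test/nvmf/target/aio_bdev"
  # Backing file -> AIO bdev -> lvstore (4 MiB clusters) -> 150 MiB lvol.
  truncate -s 200M "$AIO"
  $RPC bdev_aio_create "$AIO" aio_bdev 4096
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  # Grow only the backing file for now; the lvstore itself is grown mid-run.
  truncate -s 400M "$AIO"
  $RPC bdev_aio_rescan aio_bdev
  # Export the lvol over NVMe/TCP and point the idle (-z) bdevperf at it.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
          -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
  # While the 10 s randwrite run is in flight, the lvstore is grown into the new space:
  $RPC bdev_lvol_grow_lvstore -u "$lvs"
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 (was 49)
  # After the run completes, free clusters reflect the grown pool minus the lvol:
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61 = 99 - 38 (150 MiB at 4 MiB/cluster)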
00:07:02.176 [2024-10-30 13:54:00.388618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid840889 ] 00:07:02.176 [2024-10-30 13:54:00.455428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.437 [2024-10-30 13:54:00.501763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.437 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.437 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:02.437 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:02.697 Nvme0n1 00:07:02.697 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:02.958 [ 00:07:02.958 { 00:07:02.958 "name": "Nvme0n1", 00:07:02.958 "aliases": [ 00:07:02.958 "f264402c-9cbb-41b5-ae15-2fd689c81a46" 00:07:02.958 ], 00:07:02.958 "product_name": "NVMe disk", 00:07:02.958 "block_size": 4096, 00:07:02.958 "num_blocks": 38912, 00:07:02.958 "uuid": "f264402c-9cbb-41b5-ae15-2fd689c81a46", 00:07:02.958 "numa_id": 0, 00:07:02.958 "assigned_rate_limits": { 00:07:02.958 "rw_ios_per_sec": 0, 00:07:02.958 "rw_mbytes_per_sec": 0, 00:07:02.958 "r_mbytes_per_sec": 0, 00:07:02.959 "w_mbytes_per_sec": 0 00:07:02.959 }, 00:07:02.959 "claimed": false, 00:07:02.959 "zoned": false, 00:07:02.959 "supported_io_types": { 00:07:02.959 "read": true, 00:07:02.959 "write": true, 00:07:02.959 "unmap": true, 00:07:02.959 "flush": true, 00:07:02.959 "reset": true, 00:07:02.959 "nvme_admin": true, 00:07:02.959 "nvme_io": true, 00:07:02.959 "nvme_io_md": false, 00:07:02.959 "write_zeroes": true, 00:07:02.959 "zcopy": false, 00:07:02.959 "get_zone_info": false, 00:07:02.959 "zone_management": false, 00:07:02.959 "zone_append": false, 00:07:02.959 "compare": true, 00:07:02.959 "compare_and_write": true, 00:07:02.959 "abort": true, 00:07:02.959 "seek_hole": false, 00:07:02.959 "seek_data": false, 00:07:02.959 "copy": true, 00:07:02.959 "nvme_iov_md": false 00:07:02.959 }, 00:07:02.959 "memory_domains": [ 00:07:02.959 { 00:07:02.959 "dma_device_id": "system", 00:07:02.959 "dma_device_type": 1 00:07:02.959 } 00:07:02.959 ], 00:07:02.959 "driver_specific": { 00:07:02.959 "nvme": [ 00:07:02.959 { 00:07:02.959 "trid": { 00:07:02.959 "trtype": "TCP", 00:07:02.959 "adrfam": "IPv4", 00:07:02.959 "traddr": "10.0.0.2", 00:07:02.959 "trsvcid": "4420", 00:07:02.959 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:02.959 }, 00:07:02.959 "ctrlr_data": { 00:07:02.959 "cntlid": 1, 00:07:02.959 "vendor_id": "0x8086", 00:07:02.959 "model_number": "SPDK bdev Controller", 00:07:02.959 "serial_number": "SPDK0", 00:07:02.959 "firmware_revision": "25.01", 00:07:02.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:02.959 "oacs": { 00:07:02.959 "security": 0, 00:07:02.959 "format": 0, 00:07:02.959 "firmware": 0, 00:07:02.959 "ns_manage": 0 00:07:02.959 }, 00:07:02.959 "multi_ctrlr": true, 00:07:02.959 
"ana_reporting": false 00:07:02.959 }, 00:07:02.959 "vs": { 00:07:02.959 "nvme_version": "1.3" 00:07:02.959 }, 00:07:02.959 "ns_data": { 00:07:02.959 "id": 1, 00:07:02.959 "can_share": true 00:07:02.959 } 00:07:02.959 } 00:07:02.959 ], 00:07:02.959 "mp_policy": "active_passive" 00:07:02.959 } 00:07:02.959 } 00:07:02.959 ] 00:07:02.959 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=841098 00:07:02.959 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:02.959 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:02.959 Running I/O for 10 seconds... 00:07:03.901 Latency(us) 00:07:03.901 [2024-10-30T12:54:02.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.901 Nvme0n1 : 1.00 18302.00 71.49 0.00 0.00 0.00 0.00 0.00 00:07:03.901 [2024-10-30T12:54:02.200Z] =================================================================================================================== 00:07:03.901 [2024-10-30T12:54:02.200Z] Total : 18302.00 71.49 0.00 0.00 0.00 0.00 0.00 00:07:03.901 00:07:04.838 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:05.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.097 Nvme0n1 : 2.00 21853.00 85.36 0.00 0.00 0.00 0.00 0.00 00:07:05.097 [2024-10-30T12:54:03.396Z] =================================================================================================================== 00:07:05.097 [2024-10-30T12:54:03.396Z] Total : 21853.00 85.36 0.00 0.00 0.00 0.00 0.00 00:07:05.097 00:07:05.097 true 00:07:05.097 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:05.097 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:05.356 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:05.356 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:05.356 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 841098 00:07:05.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.926 Nvme0n1 : 3.00 23059.00 90.07 0.00 0.00 0.00 0.00 0.00 00:07:05.926 [2024-10-30T12:54:04.225Z] =================================================================================================================== 00:07:05.926 [2024-10-30T12:54:04.225Z] Total : 23059.00 90.07 0.00 0.00 0.00 0.00 0.00 00:07:05.926 00:07:06.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.865 Nvme0n1 : 4.00 23677.75 92.49 0.00 0.00 0.00 0.00 0.00 00:07:06.865 [2024-10-30T12:54:05.164Z] 
=================================================================================================================== 00:07:06.865 [2024-10-30T12:54:05.164Z] Total : 23677.75 92.49 0.00 0.00 0.00 0.00 0.00 00:07:06.865 00:07:08.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.244 Nvme0n1 : 5.00 24062.00 93.99 0.00 0.00 0.00 0.00 0.00 00:07:08.244 [2024-10-30T12:54:06.543Z] =================================================================================================================== 00:07:08.244 [2024-10-30T12:54:06.543Z] Total : 24062.00 93.99 0.00 0.00 0.00 0.00 0.00 00:07:08.244 00:07:09.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.185 Nvme0n1 : 6.00 24318.00 94.99 0.00 0.00 0.00 0.00 0.00 00:07:09.185 [2024-10-30T12:54:07.484Z] =================================================================================================================== 00:07:09.185 [2024-10-30T12:54:07.484Z] Total : 24318.00 94.99 0.00 0.00 0.00 0.00 0.00 00:07:09.185 00:07:10.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.126 Nvme0n1 : 7.00 24501.14 95.71 0.00 0.00 0.00 0.00 0.00 00:07:10.126 [2024-10-30T12:54:08.425Z] =================================================================================================================== 00:07:10.126 [2024-10-30T12:54:08.425Z] Total : 24501.14 95.71 0.00 0.00 0.00 0.00 0.00 00:07:10.126 00:07:11.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.071 Nvme0n1 : 8.00 24638.50 96.24 0.00 0.00 0.00 0.00 0.00 00:07:11.072 [2024-10-30T12:54:09.371Z] =================================================================================================================== 00:07:11.072 [2024-10-30T12:54:09.371Z] Total : 24638.50 96.24 0.00 0.00 0.00 0.00 0.00 00:07:11.072 00:07:12.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.013 Nvme0n1 : 9.00 24744.78 96.66 0.00 0.00 0.00 0.00 0.00 00:07:12.013 [2024-10-30T12:54:10.312Z] =================================================================================================================== 00:07:12.013 [2024-10-30T12:54:10.312Z] Total : 24744.78 96.66 0.00 0.00 0.00 0.00 0.00 00:07:12.013 00:07:12.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.954 Nvme0n1 : 10.00 24827.60 96.98 0.00 0.00 0.00 0.00 0.00 00:07:12.954 [2024-10-30T12:54:11.253Z] =================================================================================================================== 00:07:12.954 [2024-10-30T12:54:11.253Z] Total : 24827.60 96.98 0.00 0.00 0.00 0.00 0.00 00:07:12.954 00:07:12.954 00:07:12.954 Latency(us) 00:07:12.954 [2024-10-30T12:54:11.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.954 Nvme0n1 : 10.00 24832.22 97.00 0.00 0.00 5151.54 3072.00 18240.85 00:07:12.954 [2024-10-30T12:54:11.253Z] =================================================================================================================== 00:07:12.954 [2024-10-30T12:54:11.253Z] Total : 24832.22 97.00 0.00 0.00 5151.54 3072.00 18240.85 00:07:12.954 { 00:07:12.954 "results": [ 00:07:12.954 { 00:07:12.954 "job": "Nvme0n1", 00:07:12.954 "core_mask": "0x2", 00:07:12.954 "workload": "randwrite", 00:07:12.954 "status": "finished", 00:07:12.954 "queue_depth": 128, 00:07:12.954 "io_size": 4096, 00:07:12.954 
"runtime": 10.003293, 00:07:12.954 "iops": 24832.22274904874, 00:07:12.954 "mibps": 97.00087011347163, 00:07:12.954 "io_failed": 0, 00:07:12.954 "io_timeout": 0, 00:07:12.954 "avg_latency_us": 5151.539516593935, 00:07:12.954 "min_latency_us": 3072.0, 00:07:12.954 "max_latency_us": 18240.853333333333 00:07:12.954 } 00:07:12.954 ], 00:07:12.954 "core_count": 1 00:07:12.954 } 00:07:12.954 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 840889 00:07:12.954 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 840889 ']' 00:07:12.954 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 840889 00:07:12.954 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:12.954 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.954 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 840889 00:07:13.215 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:13.215 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:13.215 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 840889' 00:07:13.215 killing process with pid 840889 00:07:13.215 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 840889 00:07:13.215 Received shutdown signal, test time was about 10.000000 seconds 00:07:13.215 00:07:13.215 Latency(us) 00:07:13.215 [2024-10-30T12:54:11.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.215 [2024-10-30T12:54:11.514Z] =================================================================================================================== 00:07:13.215 [2024-10-30T12:54:11.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:13.215 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 840889 00:07:13.215 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.475 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:13.476 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:13.476 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:13.737 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:13.737 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:13.737 13:54:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:13.737 [2024-10-30 13:54:12.017275] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:13.999 request: 00:07:13.999 { 00:07:13.999 "uuid": "b38133a4-0803-4ede-bc01-a87d486b03b4", 00:07:13.999 "method": "bdev_lvol_get_lvstores", 00:07:13.999 "req_id": 1 00:07:13.999 } 00:07:13.999 Got JSON-RPC error response 00:07:13.999 response: 00:07:13.999 { 00:07:13.999 "code": -19, 00:07:13.999 "message": "No such device" 00:07:13.999 } 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.999 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:14.261 aio_bdev 00:07:14.261 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f264402c-9cbb-41b5-ae15-2fd689c81a46 00:07:14.261 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f264402c-9cbb-41b5-ae15-2fd689c81a46 00:07:14.261 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:14.261 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:14.261 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:14.261 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:14.261 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:14.522 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f264402c-9cbb-41b5-ae15-2fd689c81a46 -t 2000 00:07:14.522 [ 00:07:14.522 { 00:07:14.522 "name": "f264402c-9cbb-41b5-ae15-2fd689c81a46", 00:07:14.522 "aliases": [ 00:07:14.522 "lvs/lvol" 00:07:14.522 ], 00:07:14.522 "product_name": "Logical Volume", 00:07:14.522 "block_size": 4096, 00:07:14.522 "num_blocks": 38912, 00:07:14.522 "uuid": "f264402c-9cbb-41b5-ae15-2fd689c81a46", 00:07:14.522 "assigned_rate_limits": { 00:07:14.522 "rw_ios_per_sec": 0, 00:07:14.522 "rw_mbytes_per_sec": 0, 00:07:14.522 "r_mbytes_per_sec": 0, 00:07:14.522 "w_mbytes_per_sec": 0 00:07:14.522 }, 00:07:14.522 "claimed": false, 00:07:14.522 "zoned": false, 00:07:14.522 "supported_io_types": { 00:07:14.522 "read": true, 00:07:14.522 "write": true, 00:07:14.522 "unmap": true, 00:07:14.522 "flush": false, 00:07:14.522 "reset": true, 00:07:14.522 "nvme_admin": false, 00:07:14.522 "nvme_io": false, 00:07:14.522 "nvme_io_md": false, 00:07:14.522 "write_zeroes": true, 00:07:14.522 "zcopy": false, 00:07:14.522 "get_zone_info": false, 00:07:14.522 "zone_management": false, 00:07:14.522 "zone_append": false, 00:07:14.522 "compare": false, 00:07:14.522 "compare_and_write": false, 00:07:14.522 "abort": false, 00:07:14.522 "seek_hole": true, 00:07:14.522 "seek_data": true, 00:07:14.522 "copy": false, 00:07:14.522 "nvme_iov_md": false 00:07:14.522 }, 00:07:14.522 "driver_specific": { 00:07:14.522 "lvol": { 00:07:14.522 "lvol_store_uuid": "b38133a4-0803-4ede-bc01-a87d486b03b4", 00:07:14.522 "base_bdev": "aio_bdev", 00:07:14.522 "thin_provision": false, 00:07:14.522 "num_allocated_clusters": 38, 00:07:14.522 "snapshot": false, 00:07:14.522 "clone": false, 00:07:14.522 "esnap_clone": false 00:07:14.522 } 00:07:14.522 } 00:07:14.522 } 00:07:14.522 ] 00:07:14.522 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:14.522 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:14.522 
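Editor's note: the shutdown half of the clean case is a metadata-persistence check. nvmf_lvs_grow.sh@84 deletes aio_bdev while the lvstore is still open, vbdev_lvs_hotremove_cb closes the lvstore with it, and the NOT-wrapped bdev_lvol_get_lvstores above is required to fail exactly as it does, with -19 "No such device". Re-creating the AIO bdev at @86 lets lvol examine reload the on-disk metadata, and waitforbdev blocks until the lvol reappears; its 38 num_allocated_clusters, together with the 61/99 figures re-checked just below, show the grown layout survived the reload. A sketch of that failure-and-reload round trip, with a plain ! standing in for the harness's NOT helper:
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  lvs=b38133a4-0803-4ede-bc01-a87d486b03b4      # lvstore UUID from this run
  lvol=f264402c-9cbb-41b5-ae15-2fd689c81a46     # lvol UUID from this run
  $RPC bdev_aio_delete aio_bdev                 # hot-removes the base bdev; the lvstore closes with it
  ! $RPC bdev_lvol_get_lvstores -u "$lvs"       # must now fail: code -19, "No such device"
  $RPC bdev_aio_create "$SPDK/test/nvmf/target/aio_bdev" aio_bdev 4096
  $RPC bdev_wait_for_examine                    # let lvol examine reload the metadata from the file
  $RPC bdev_get_bdevs -b "$lvol" -t 2000        # lvol is back: 38 allocated clusters, thin_provision false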
13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:14.783 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:14.783 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:14.783 13:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:14.783 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:14.783 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f264402c-9cbb-41b5-ae15-2fd689c81a46 00:07:15.045 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b38133a4-0803-4ede-bc01-a87d486b03b4 00:07:15.307 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:15.307 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:15.307 00:07:15.307 real 0m15.171s 00:07:15.307 user 0m14.813s 00:07:15.307 sys 0m1.391s 00:07:15.307 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.307 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:15.307 ************************************ 00:07:15.307 END TEST lvs_grow_clean 00:07:15.307 ************************************ 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.568 ************************************ 00:07:15.568 START TEST lvs_grow_dirty 00:07:15.568 ************************************ 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:15.568 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:15.829 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:15.829 13:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:15.829 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a40ee66c-569a-4aff-a566-a4e760c52408 00:07:15.829 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:15.829 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:16.090 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:16.090 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:16.090 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a40ee66c-569a-4aff-a566-a4e760c52408 lvol 150 00:07:16.090 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1d7c4232-689e-4ad1-8c32-db404dde12e1 00:07:16.090 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:16.351 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:16.351 [2024-10-30 13:54:14.544082] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:16.351 [2024-10-30 13:54:14.544129] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:16.351 true 00:07:16.351 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:16.351 13:54:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:16.611 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:16.611 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.611 13:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d7c4232-689e-4ad1-8c32-db404dde12e1 00:07:16.872 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:17.133 [2024-10-30 13:54:15.197965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.133 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=844387 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 844387 /var/tmp/bdevperf.sock 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 844387 ']' 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:17.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.134 13:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 [2024-10-30 13:54:15.440494] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
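Editor's note: from here the dirty variant replays the same pipeline against a fresh lvstore (a40ee66c-569a-4aff-a566-a4e760c52408) and lvol (1d7c4232-689e-4ad1-8c32-db404dde12e1), re-exports it through cnode0 and drives a second bdevperf (pid 844387); the difference is the "dirty" argument passed by run_test, which should send the teardown through the branch guarded at @72 that the clean pass skipped. The bdevperf tables that follow can be sanity-checked from the I/O size alone: at 4 KiB per I/O, MiB/s = IOPS * io_size / 2^20. For the steady-state figure the dirty run reaches below:
  # Cross-check one bdevperf row: MiB/s = IOPS * io_size / 2^20.
  awk -v iops=25521.50 -v io_size=4096 'BEGIN { printf "%.2f MiB/s\n", iops * io_size / 1048576 }'
  # -> 99.69 MiB/s, matching the MiB/s column reported for the 10th second of the run.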
00:07:17.395 [2024-10-30 13:54:15.440548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844387 ] 00:07:17.395 [2024-10-30 13:54:15.523678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.395 [2024-10-30 13:54:15.553546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.967 13:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.967 13:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:17.967 13:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:18.539 Nvme0n1 00:07:18.539 13:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:18.539 [ 00:07:18.539 { 00:07:18.539 "name": "Nvme0n1", 00:07:18.539 "aliases": [ 00:07:18.539 "1d7c4232-689e-4ad1-8c32-db404dde12e1" 00:07:18.539 ], 00:07:18.539 "product_name": "NVMe disk", 00:07:18.539 "block_size": 4096, 00:07:18.539 "num_blocks": 38912, 00:07:18.539 "uuid": "1d7c4232-689e-4ad1-8c32-db404dde12e1", 00:07:18.539 "numa_id": 0, 00:07:18.539 "assigned_rate_limits": { 00:07:18.539 "rw_ios_per_sec": 0, 00:07:18.539 "rw_mbytes_per_sec": 0, 00:07:18.539 "r_mbytes_per_sec": 0, 00:07:18.539 "w_mbytes_per_sec": 0 00:07:18.539 }, 00:07:18.539 "claimed": false, 00:07:18.539 "zoned": false, 00:07:18.539 "supported_io_types": { 00:07:18.539 "read": true, 00:07:18.539 "write": true, 00:07:18.539 "unmap": true, 00:07:18.539 "flush": true, 00:07:18.539 "reset": true, 00:07:18.539 "nvme_admin": true, 00:07:18.539 "nvme_io": true, 00:07:18.539 "nvme_io_md": false, 00:07:18.539 "write_zeroes": true, 00:07:18.539 "zcopy": false, 00:07:18.539 "get_zone_info": false, 00:07:18.539 "zone_management": false, 00:07:18.539 "zone_append": false, 00:07:18.539 "compare": true, 00:07:18.539 "compare_and_write": true, 00:07:18.539 "abort": true, 00:07:18.539 "seek_hole": false, 00:07:18.539 "seek_data": false, 00:07:18.539 "copy": true, 00:07:18.539 "nvme_iov_md": false 00:07:18.539 }, 00:07:18.539 "memory_domains": [ 00:07:18.539 { 00:07:18.539 "dma_device_id": "system", 00:07:18.539 "dma_device_type": 1 00:07:18.539 } 00:07:18.539 ], 00:07:18.539 "driver_specific": { 00:07:18.539 "nvme": [ 00:07:18.539 { 00:07:18.539 "trid": { 00:07:18.539 "trtype": "TCP", 00:07:18.539 "adrfam": "IPv4", 00:07:18.539 "traddr": "10.0.0.2", 00:07:18.539 "trsvcid": "4420", 00:07:18.539 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:18.539 }, 00:07:18.539 "ctrlr_data": { 00:07:18.539 "cntlid": 1, 00:07:18.539 "vendor_id": "0x8086", 00:07:18.539 "model_number": "SPDK bdev Controller", 00:07:18.539 "serial_number": "SPDK0", 00:07:18.539 "firmware_revision": "25.01", 00:07:18.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:18.539 "oacs": { 00:07:18.539 "security": 0, 00:07:18.539 "format": 0, 00:07:18.539 "firmware": 0, 00:07:18.539 "ns_manage": 0 00:07:18.539 }, 00:07:18.539 "multi_ctrlr": true, 00:07:18.539 
"ana_reporting": false 00:07:18.539 }, 00:07:18.539 "vs": { 00:07:18.539 "nvme_version": "1.3" 00:07:18.539 }, 00:07:18.539 "ns_data": { 00:07:18.539 "id": 1, 00:07:18.539 "can_share": true 00:07:18.539 } 00:07:18.539 } 00:07:18.539 ], 00:07:18.539 "mp_policy": "active_passive" 00:07:18.539 } 00:07:18.539 } 00:07:18.539 ] 00:07:18.539 13:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=844702 00:07:18.539 13:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:18.539 13:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:18.800 Running I/O for 10 seconds... 00:07:19.742 Latency(us) 00:07:19.742 [2024-10-30T12:54:18.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.742 Nvme0n1 : 1.00 25221.00 98.52 0.00 0.00 0.00 0.00 0.00 00:07:19.742 [2024-10-30T12:54:18.041Z] =================================================================================================================== 00:07:19.742 [2024-10-30T12:54:18.041Z] Total : 25221.00 98.52 0.00 0.00 0.00 0.00 0.00 00:07:19.742 00:07:20.686 13:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:20.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.686 Nvme0n1 : 2.00 25313.00 98.88 0.00 0.00 0.00 0.00 0.00 00:07:20.686 [2024-10-30T12:54:18.985Z] =================================================================================================================== 00:07:20.686 [2024-10-30T12:54:18.985Z] Total : 25313.00 98.88 0.00 0.00 0.00 0.00 0.00 00:07:20.686 00:07:20.946 true 00:07:20.946 13:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:20.946 13:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:20.946 13:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:20.946 13:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:20.946 13:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 844702 00:07:21.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.889 Nvme0n1 : 3.00 25364.67 99.08 0.00 0.00 0.00 0.00 0.00 00:07:21.889 [2024-10-30T12:54:20.188Z] =================================================================================================================== 00:07:21.889 [2024-10-30T12:54:20.188Z] Total : 25364.67 99.08 0.00 0.00 0.00 0.00 0.00 00:07:21.889 00:07:22.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.831 Nvme0n1 : 4.00 25406.50 99.24 0.00 0.00 0.00 0.00 0.00 00:07:22.831 [2024-10-30T12:54:21.130Z] 
=================================================================================================================== 00:07:22.831 [2024-10-30T12:54:21.130Z] Total : 25406.50 99.24 0.00 0.00 0.00 0.00 0.00 00:07:22.831 00:07:23.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.773 Nvme0n1 : 5.00 25432.60 99.35 0.00 0.00 0.00 0.00 0.00 00:07:23.773 [2024-10-30T12:54:22.072Z] =================================================================================================================== 00:07:23.773 [2024-10-30T12:54:22.072Z] Total : 25432.60 99.35 0.00 0.00 0.00 0.00 0.00 00:07:23.773 00:07:24.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.716 Nvme0n1 : 6.00 25460.33 99.45 0.00 0.00 0.00 0.00 0.00 00:07:24.716 [2024-10-30T12:54:23.015Z] =================================================================================================================== 00:07:24.716 [2024-10-30T12:54:23.015Z] Total : 25460.33 99.45 0.00 0.00 0.00 0.00 0.00 00:07:24.716 00:07:25.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.660 Nvme0n1 : 7.00 25482.29 99.54 0.00 0.00 0.00 0.00 0.00 00:07:25.660 [2024-10-30T12:54:23.959Z] =================================================================================================================== 00:07:25.660 [2024-10-30T12:54:23.959Z] Total : 25482.29 99.54 0.00 0.00 0.00 0.00 0.00 00:07:25.660 00:07:27.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.048 Nvme0n1 : 8.00 25494.62 99.59 0.00 0.00 0.00 0.00 0.00 00:07:27.048 [2024-10-30T12:54:25.347Z] =================================================================================================================== 00:07:27.048 [2024-10-30T12:54:25.347Z] Total : 25494.62 99.59 0.00 0.00 0.00 0.00 0.00 00:07:27.048 00:07:27.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.621 Nvme0n1 : 9.00 25513.11 99.66 0.00 0.00 0.00 0.00 0.00 00:07:27.621 [2024-10-30T12:54:25.920Z] =================================================================================================================== 00:07:27.621 [2024-10-30T12:54:25.920Z] Total : 25513.11 99.66 0.00 0.00 0.00 0.00 0.00 00:07:27.621 00:07:29.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.005 Nvme0n1 : 10.00 25521.50 99.69 0.00 0.00 0.00 0.00 0.00 00:07:29.005 [2024-10-30T12:54:27.304Z] =================================================================================================================== 00:07:29.005 [2024-10-30T12:54:27.304Z] Total : 25521.50 99.69 0.00 0.00 0.00 0.00 0.00 00:07:29.005 00:07:29.005 00:07:29.005 Latency(us) 00:07:29.005 [2024-10-30T12:54:27.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.005 Nvme0n1 : 10.00 25525.31 99.71 0.00 0.00 5011.76 1454.08 10321.92 00:07:29.005 [2024-10-30T12:54:27.305Z] =================================================================================================================== 00:07:29.006 [2024-10-30T12:54:27.305Z] Total : 25525.31 99.71 0.00 0.00 5011.76 1454.08 10321.92 00:07:29.006 { 00:07:29.006 "results": [ 00:07:29.006 { 00:07:29.006 "job": "Nvme0n1", 00:07:29.006 "core_mask": "0x2", 00:07:29.006 "workload": "randwrite", 00:07:29.006 "status": "finished", 00:07:29.006 "queue_depth": 128, 00:07:29.006 "io_size": 4096, 00:07:29.006 
"runtime": 10.003521, 00:07:29.006 "iops": 25525.31253745556, 00:07:29.006 "mibps": 99.70825209943578, 00:07:29.006 "io_failed": 0, 00:07:29.006 "io_timeout": 0, 00:07:29.006 "avg_latency_us": 5011.757338690833, 00:07:29.006 "min_latency_us": 1454.08, 00:07:29.006 "max_latency_us": 10321.92 00:07:29.006 } 00:07:29.006 ], 00:07:29.006 "core_count": 1 00:07:29.006 } 00:07:29.006 13:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 844387 00:07:29.006 13:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 844387 ']' 00:07:29.006 13:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 844387 00:07:29.006 13:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:29.006 13:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.006 13:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 844387 00:07:29.006 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:29.006 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:29.006 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 844387' 00:07:29.006 killing process with pid 844387 00:07:29.006 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 844387 00:07:29.006 Received shutdown signal, test time was about 10.000000 seconds 00:07:29.006 00:07:29.006 Latency(us) 00:07:29.006 [2024-10-30T12:54:27.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.006 [2024-10-30T12:54:27.305Z] =================================================================================================================== 00:07:29.006 [2024-10-30T12:54:27.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:29.006 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 844387 00:07:29.006 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:29.006 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:29.266 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:29.266 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:29.527 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:29.527 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:29.527 13:54:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 840313 00:07:29.527 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 840313 00:07:29.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 840313 Killed "${NVMF_APP[@]}" "$@" 00:07:29.527 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:29.527 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:29.527 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=846975 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 846975 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 846975 ']' 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.528 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:29.528 [2024-10-30 13:54:27.752291] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:07:29.528 [2024-10-30 13:54:27.752351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.788 [2024-10-30 13:54:27.844665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.788 [2024-10-30 13:54:27.882656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.788 [2024-10-30 13:54:27.882697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.788 [2024-10-30 13:54:27.882706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.788 [2024-10-30 13:54:27.882713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:29.788 [2024-10-30 13:54:27.882719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.788 [2024-10-30 13:54:27.883349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.361 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.361 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:30.361 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.361 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.361 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.361 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.361 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:30.620 [2024-10-30 13:54:28.744875] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:30.620 [2024-10-30 13:54:28.744963] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:30.620 [2024-10-30 13:54:28.744991] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:30.620 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:30.620 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1d7c4232-689e-4ad1-8c32-db404dde12e1 00:07:30.620 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1d7c4232-689e-4ad1-8c32-db404dde12e1 00:07:30.620 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:30.620 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:30.620 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:30.620 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:30.620 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:30.880 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d7c4232-689e-4ad1-8c32-db404dde12e1 -t 2000 00:07:30.880 [ 00:07:30.880 { 00:07:30.880 "name": "1d7c4232-689e-4ad1-8c32-db404dde12e1", 00:07:30.880 "aliases": [ 00:07:30.880 "lvs/lvol" 00:07:30.880 ], 00:07:30.880 "product_name": "Logical Volume", 00:07:30.880 "block_size": 4096, 00:07:30.880 "num_blocks": 38912, 00:07:30.880 "uuid": "1d7c4232-689e-4ad1-8c32-db404dde12e1", 00:07:30.880 "assigned_rate_limits": { 00:07:30.880 "rw_ios_per_sec": 0, 00:07:30.880 "rw_mbytes_per_sec": 0, 
00:07:30.880 "r_mbytes_per_sec": 0, 00:07:30.880 "w_mbytes_per_sec": 0 00:07:30.880 }, 00:07:30.880 "claimed": false, 00:07:30.880 "zoned": false, 00:07:30.880 "supported_io_types": { 00:07:30.880 "read": true, 00:07:30.880 "write": true, 00:07:30.880 "unmap": true, 00:07:30.880 "flush": false, 00:07:30.880 "reset": true, 00:07:30.880 "nvme_admin": false, 00:07:30.880 "nvme_io": false, 00:07:30.880 "nvme_io_md": false, 00:07:30.880 "write_zeroes": true, 00:07:30.880 "zcopy": false, 00:07:30.880 "get_zone_info": false, 00:07:30.880 "zone_management": false, 00:07:30.880 "zone_append": false, 00:07:30.880 "compare": false, 00:07:30.880 "compare_and_write": false, 00:07:30.880 "abort": false, 00:07:30.880 "seek_hole": true, 00:07:30.880 "seek_data": true, 00:07:30.880 "copy": false, 00:07:30.880 "nvme_iov_md": false 00:07:30.880 }, 00:07:30.880 "driver_specific": { 00:07:30.880 "lvol": { 00:07:30.880 "lvol_store_uuid": "a40ee66c-569a-4aff-a566-a4e760c52408", 00:07:30.880 "base_bdev": "aio_bdev", 00:07:30.880 "thin_provision": false, 00:07:30.880 "num_allocated_clusters": 38, 00:07:30.880 "snapshot": false, 00:07:30.880 "clone": false, 00:07:30.880 "esnap_clone": false 00:07:30.880 } 00:07:30.880 } 00:07:30.880 } 00:07:30.880 ] 00:07:30.880 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:30.880 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:30.880 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:31.140 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:31.140 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:31.140 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:31.140 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:31.140 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:31.399 [2024-10-30 13:54:29.589518] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:31.399 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:31.659 request: 00:07:31.659 { 00:07:31.659 "uuid": "a40ee66c-569a-4aff-a566-a4e760c52408", 00:07:31.659 "method": "bdev_lvol_get_lvstores", 00:07:31.659 "req_id": 1 00:07:31.659 } 00:07:31.659 Got JSON-RPC error response 00:07:31.659 response: 00:07:31.659 { 00:07:31.659 "code": -19, 00:07:31.659 "message": "No such device" 00:07:31.659 } 00:07:31.659 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:31.659 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.659 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.659 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.659 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:31.659 aio_bdev 00:07:31.918 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1d7c4232-689e-4ad1-8c32-db404dde12e1 00:07:31.918 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1d7c4232-689e-4ad1-8c32-db404dde12e1 00:07:31.918 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:31.918 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:31.918 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:31.918 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:31.918 13:54:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:31.918 13:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d7c4232-689e-4ad1-8c32-db404dde12e1 -t 2000 00:07:32.178 [ 00:07:32.178 { 00:07:32.178 "name": "1d7c4232-689e-4ad1-8c32-db404dde12e1", 00:07:32.178 "aliases": [ 00:07:32.178 "lvs/lvol" 00:07:32.178 ], 00:07:32.178 "product_name": "Logical Volume", 00:07:32.178 "block_size": 4096, 00:07:32.178 "num_blocks": 38912, 00:07:32.178 "uuid": "1d7c4232-689e-4ad1-8c32-db404dde12e1", 00:07:32.178 "assigned_rate_limits": { 00:07:32.178 "rw_ios_per_sec": 0, 00:07:32.178 "rw_mbytes_per_sec": 0, 00:07:32.178 "r_mbytes_per_sec": 0, 00:07:32.178 "w_mbytes_per_sec": 0 00:07:32.178 }, 00:07:32.178 "claimed": false, 00:07:32.178 "zoned": false, 00:07:32.178 "supported_io_types": { 00:07:32.178 "read": true, 00:07:32.178 "write": true, 00:07:32.178 "unmap": true, 00:07:32.178 "flush": false, 00:07:32.178 "reset": true, 00:07:32.178 "nvme_admin": false, 00:07:32.178 "nvme_io": false, 00:07:32.178 "nvme_io_md": false, 00:07:32.178 "write_zeroes": true, 00:07:32.178 "zcopy": false, 00:07:32.178 "get_zone_info": false, 00:07:32.178 "zone_management": false, 00:07:32.178 "zone_append": false, 00:07:32.178 "compare": false, 00:07:32.178 "compare_and_write": false, 00:07:32.179 "abort": false, 00:07:32.179 "seek_hole": true, 00:07:32.179 "seek_data": true, 00:07:32.179 "copy": false, 00:07:32.179 "nvme_iov_md": false 00:07:32.179 }, 00:07:32.179 "driver_specific": { 00:07:32.179 "lvol": { 00:07:32.179 "lvol_store_uuid": "a40ee66c-569a-4aff-a566-a4e760c52408", 00:07:32.179 "base_bdev": "aio_bdev", 00:07:32.179 "thin_provision": false, 00:07:32.179 "num_allocated_clusters": 38, 00:07:32.179 "snapshot": false, 00:07:32.179 "clone": false, 00:07:32.179 "esnap_clone": false 00:07:32.179 } 00:07:32.179 } 00:07:32.179 } 00:07:32.179 ] 00:07:32.179 13:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:32.179 13:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:32.179 13:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:32.439 13:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:32.439 13:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:32.439 13:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:32.439 13:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:32.439 13:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d7c4232-689e-4ad1-8c32-db404dde12e1 00:07:32.700 13:54:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a40ee66c-569a-4aff-a566-a4e760c52408 00:07:32.961 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:32.961 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:32.961 00:07:32.961 real 0m17.572s 00:07:32.961 user 0m45.274s 00:07:32.961 sys 0m2.960s 00:07:32.961 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.961 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:32.961 ************************************ 00:07:32.961 END TEST lvs_grow_dirty 00:07:32.961 ************************************ 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:33.221 nvmf_trace.0 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.221 rmmod nvme_tcp 00:07:33.221 rmmod nvme_fabrics 00:07:33.221 rmmod nvme_keyring 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:33.221 
13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 846975 ']' 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 846975 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 846975 ']' 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 846975 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 846975 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 846975' 00:07:33.221 killing process with pid 846975 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 846975 00:07:33.221 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 846975 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.482 13:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.397 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:35.397 00:07:35.397 real 0m44.079s 00:07:35.397 user 1m6.542s 00:07:35.397 sys 0m10.374s 00:07:35.397 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.397 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.397 ************************************ 00:07:35.397 END TEST nvmf_lvs_grow 00:07:35.397 ************************************ 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.659 ************************************ 00:07:35.659 START TEST nvmf_bdev_io_wait 00:07:35.659 ************************************ 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:35.659 * Looking for test storage... 00:07:35.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.659 --rc genhtml_branch_coverage=1 00:07:35.659 --rc genhtml_function_coverage=1 00:07:35.659 --rc genhtml_legend=1 00:07:35.659 --rc geninfo_all_blocks=1 00:07:35.659 --rc geninfo_unexecuted_blocks=1 00:07:35.659 00:07:35.659 ' 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.659 --rc genhtml_branch_coverage=1 00:07:35.659 --rc genhtml_function_coverage=1 00:07:35.659 --rc genhtml_legend=1 00:07:35.659 --rc geninfo_all_blocks=1 00:07:35.659 --rc geninfo_unexecuted_blocks=1 00:07:35.659 00:07:35.659 ' 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.659 --rc genhtml_branch_coverage=1 00:07:35.659 --rc genhtml_function_coverage=1 00:07:35.659 --rc genhtml_legend=1 00:07:35.659 --rc geninfo_all_blocks=1 00:07:35.659 --rc geninfo_unexecuted_blocks=1 00:07:35.659 00:07:35.659 ' 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.659 --rc genhtml_branch_coverage=1 00:07:35.659 --rc genhtml_function_coverage=1 00:07:35.659 --rc genhtml_legend=1 00:07:35.659 --rc geninfo_all_blocks=1 00:07:35.659 --rc geninfo_unexecuted_blocks=1 00:07:35.659 00:07:35.659 ' 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.659 13:54:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.659 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.921 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.921 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.921 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.921 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.921 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.921 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.922 13:54:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:44.067 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.067 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:44.068 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.068 13:54:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:44.068 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:44.068 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:44.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:07:44.068 00:07:44.068 --- 10.0.0.2 ping statistics --- 00:07:44.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.068 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:07:44.068 00:07:44.068 --- 10.0.0.1 ping statistics --- 00:07:44.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.068 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=852020 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 852020 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 852020 ']' 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.068 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.069 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.069 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.069 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.069 [2024-10-30 13:54:41.545406] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
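The nvmf_tcp_init sequence traced above is what gives the initiator and target separate network stacks on a single host: the target-side E810 port is moved into its own network namespace while the initiator-side port stays in the root namespace. A condensed sketch of that bring-up, using only the namespace, interface, and address values that appear in this trace (this is a recap of the traced commands, not the common.sh source itself):

  # Isolate the target-side port in its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address both ends: initiator in the root namespace, target inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring the links (and the namespace loopback) up
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP listener port on the initiator-facing interface, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Once both pings succeed, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ...), so every later rpc_cmd and bdevperf connection crosses the cvl_0_1 <-> cvl_0_0 path.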
00:07:44.069 [2024-10-30 13:54:41.545472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.069 [2024-10-30 13:54:41.648726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.069 [2024-10-30 13:54:41.703285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.069 [2024-10-30 13:54:41.703346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.069 [2024-10-30 13:54:41.703358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.069 [2024-10-30 13:54:41.703368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.069 [2024-10-30 13:54:41.703376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.069 [2024-10-30 13:54:41.705805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.069 [2024-10-30 13:54:41.706049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.069 [2024-10-30 13:54:41.706051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.069 [2024-10-30 13:54:41.705884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:44.332 [2024-10-30 13:54:42.497215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.332 Malloc0 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.332 [2024-10-30 13:54:42.562585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=852167 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=852169 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:44.332 { 00:07:44.332 "params": { 
00:07:44.332 "name": "Nvme$subsystem", 00:07:44.332 "trtype": "$TEST_TRANSPORT", 00:07:44.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.332 "adrfam": "ipv4", 00:07:44.332 "trsvcid": "$NVMF_PORT", 00:07:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.332 "hdgst": ${hdgst:-false}, 00:07:44.332 "ddgst": ${ddgst:-false} 00:07:44.332 }, 00:07:44.332 "method": "bdev_nvme_attach_controller" 00:07:44.332 } 00:07:44.332 EOF 00:07:44.332 )") 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=852171 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:44.332 { 00:07:44.332 "params": { 00:07:44.332 "name": "Nvme$subsystem", 00:07:44.332 "trtype": "$TEST_TRANSPORT", 00:07:44.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.332 "adrfam": "ipv4", 00:07:44.332 "trsvcid": "$NVMF_PORT", 00:07:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.332 "hdgst": ${hdgst:-false}, 00:07:44.332 "ddgst": ${ddgst:-false} 00:07:44.332 }, 00:07:44.332 "method": "bdev_nvme_attach_controller" 00:07:44.332 } 00:07:44.332 EOF 00:07:44.332 )") 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=852174 00:07:44.332 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:44.333 { 00:07:44.333 "params": { 00:07:44.333 "name": "Nvme$subsystem", 00:07:44.333 "trtype": "$TEST_TRANSPORT", 00:07:44.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.333 "adrfam": "ipv4", 00:07:44.333 "trsvcid": "$NVMF_PORT", 00:07:44.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.333 "hdgst": ${hdgst:-false}, 
00:07:44.333 "ddgst": ${ddgst:-false} 00:07:44.333 }, 00:07:44.333 "method": "bdev_nvme_attach_controller" 00:07:44.333 } 00:07:44.333 EOF 00:07:44.333 )") 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:44.333 { 00:07:44.333 "params": { 00:07:44.333 "name": "Nvme$subsystem", 00:07:44.333 "trtype": "$TEST_TRANSPORT", 00:07:44.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.333 "adrfam": "ipv4", 00:07:44.333 "trsvcid": "$NVMF_PORT", 00:07:44.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.333 "hdgst": ${hdgst:-false}, 00:07:44.333 "ddgst": ${ddgst:-false} 00:07:44.333 }, 00:07:44.333 "method": "bdev_nvme_attach_controller" 00:07:44.333 } 00:07:44.333 EOF 00:07:44.333 )") 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 852167 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:44.333 "params": { 00:07:44.333 "name": "Nvme1", 00:07:44.333 "trtype": "tcp", 00:07:44.333 "traddr": "10.0.0.2", 00:07:44.333 "adrfam": "ipv4", 00:07:44.333 "trsvcid": "4420", 00:07:44.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:44.333 "hdgst": false, 00:07:44.333 "ddgst": false 00:07:44.333 }, 00:07:44.333 "method": "bdev_nvme_attach_controller" 00:07:44.333 }' 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
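The repeated config+=( heredoc ) and printf/jq traces around this point are gen_nvmf_target_json assembling a per-instance JSON config: each bdevperf process receives a single bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420 and nqn.2016-06.io.spdk:cnode1, delivered on /dev/fd/63. Pulling the control-plane steps out of the trace gives roughly the sequence below; rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, bdevperf is build/examples/bdevperf, and the inline comments on flag intent are interpretation rather than anything stated in the log:

  # Target side, issued before the workloads start
  rpc_cmd bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache, presumably to exercise the io_wait path
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: one bdevperf per workload, each reading its generated JSON from fd 63
  bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256   # WRITE_PID
  bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256   # READ_PID
  bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256   # FLUSH_PID
  bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256   # UNMAP_PID

The distinct core masks (0x10/0x20/0x40/0x80) and -i instance IDs keep the four DPDK secondary processes from colliding with each other or with the target running on mask 0xF.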
00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:44.333 "params": { 00:07:44.333 "name": "Nvme1", 00:07:44.333 "trtype": "tcp", 00:07:44.333 "traddr": "10.0.0.2", 00:07:44.333 "adrfam": "ipv4", 00:07:44.333 "trsvcid": "4420", 00:07:44.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:44.333 "hdgst": false, 00:07:44.333 "ddgst": false 00:07:44.333 }, 00:07:44.333 "method": "bdev_nvme_attach_controller" 00:07:44.333 }' 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:44.333 "params": { 00:07:44.333 "name": "Nvme1", 00:07:44.333 "trtype": "tcp", 00:07:44.333 "traddr": "10.0.0.2", 00:07:44.333 "adrfam": "ipv4", 00:07:44.333 "trsvcid": "4420", 00:07:44.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:44.333 "hdgst": false, 00:07:44.333 "ddgst": false 00:07:44.333 }, 00:07:44.333 "method": "bdev_nvme_attach_controller" 00:07:44.333 }' 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:44.333 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:44.333 "params": { 00:07:44.333 "name": "Nvme1", 00:07:44.333 "trtype": "tcp", 00:07:44.333 "traddr": "10.0.0.2", 00:07:44.333 "adrfam": "ipv4", 00:07:44.333 "trsvcid": "4420", 00:07:44.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:44.333 "hdgst": false, 00:07:44.333 "ddgst": false 00:07:44.333 }, 00:07:44.333 "method": "bdev_nvme_attach_controller" 00:07:44.333 }' 00:07:44.333 [2024-10-30 13:54:42.621930] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:07:44.333 [2024-10-30 13:54:42.622001] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:44.333 [2024-10-30 13:54:42.624324] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:07:44.333 [2024-10-30 13:54:42.624387] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:44.333 [2024-10-30 13:54:42.626253] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:07:44.333 [2024-10-30 13:54:42.626321] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:44.333 [2024-10-30 13:54:42.626396] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:07:44.333 [2024-10-30 13:54:42.626455] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:44.595 [2024-10-30 13:54:42.835735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.595 [2024-10-30 13:54:42.876452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:44.857 [2024-10-30 13:54:42.927341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.857 [2024-10-30 13:54:42.966892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:44.857 [2024-10-30 13:54:43.021422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.857 [2024-10-30 13:54:43.064468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:44.857 [2024-10-30 13:54:43.091237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.857 [2024-10-30 13:54:43.128845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:45.118 Running I/O for 1 seconds... 00:07:45.118 Running I/O for 1 seconds... 00:07:45.118 Running I/O for 1 seconds... 00:07:45.118 Running I/O for 1 seconds... 00:07:46.061 11073.00 IOPS, 43.25 MiB/s 00:07:46.061 Latency(us) 00:07:46.061 [2024-10-30T12:54:44.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.061 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:46.061 Nvme1n1 : 1.01 11113.87 43.41 0.00 0.00 11469.41 6580.91 18568.53 00:07:46.061 [2024-10-30T12:54:44.360Z] =================================================================================================================== 00:07:46.061 [2024-10-30T12:54:44.360Z] Total : 11113.87 43.41 0.00 0.00 11469.41 6580.91 18568.53 00:07:46.321 9413.00 IOPS, 36.77 MiB/s 00:07:46.321 Latency(us) 00:07:46.321 [2024-10-30T12:54:44.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.321 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:46.321 Nvme1n1 : 1.01 9481.02 37.04 0.00 0.00 13446.76 4860.59 20534.61 00:07:46.321 [2024-10-30T12:54:44.620Z] =================================================================================================================== 00:07:46.321 [2024-10-30T12:54:44.620Z] Total : 9481.02 37.04 0.00 0.00 13446.76 4860.59 20534.61 00:07:46.321 10507.00 IOPS, 41.04 MiB/s 00:07:46.321 Latency(us) 00:07:46.321 [2024-10-30T12:54:44.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.321 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:46.321 Nvme1n1 : 1.01 10594.50 41.38 0.00 0.00 12042.55 4560.21 22173.01 00:07:46.321 [2024-10-30T12:54:44.620Z] =================================================================================================================== 00:07:46.321 [2024-10-30T12:54:44.620Z] Total : 10594.50 41.38 0.00 0.00 12042.55 4560.21 22173.01 00:07:46.321 186432.00 IOPS, 728.25 MiB/s 00:07:46.321 Latency(us) 00:07:46.321 [2024-10-30T12:54:44.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.321 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:46.321 Nvme1n1 : 1.00 186059.88 726.80 0.00 0.00 684.15 305.49 1979.73 00:07:46.321 [2024-10-30T12:54:44.620Z] 
=================================================================================================================== 00:07:46.321 [2024-10-30T12:54:44.620Z] Total : 186059.88 726.80 0.00 0.00 684.15 305.49 1979.73 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 852169 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 852171 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 852174 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:46.321 rmmod nvme_tcp 00:07:46.321 rmmod nvme_fabrics 00:07:46.321 rmmod nvme_keyring 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 852020 ']' 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 852020 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 852020 ']' 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 852020 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.321 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 852020 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 852020' 00:07:46.582 killing process with pid 852020 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 852020 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 852020 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.582 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.127 13:54:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:49.127 00:07:49.127 real 0m13.144s 00:07:49.127 user 0m19.912s 00:07:49.127 sys 0m7.463s 00:07:49.127 13:54:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.127 13:54:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.127 ************************************ 00:07:49.127 END TEST nvmf_bdev_io_wait 00:07:49.127 ************************************ 00:07:49.127 13:54:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:49.127 13:54:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.127 13:54:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.127 13:54:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.127 ************************************ 00:07:49.127 START TEST nvmf_queue_depth 00:07:49.127 ************************************ 00:07:49.127 13:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:49.127 * Looking for test storage... 
00:07:49.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.127 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.128 --rc genhtml_branch_coverage=1 00:07:49.128 --rc genhtml_function_coverage=1 00:07:49.128 --rc genhtml_legend=1 00:07:49.128 --rc geninfo_all_blocks=1 00:07:49.128 --rc geninfo_unexecuted_blocks=1 00:07:49.128 00:07:49.128 ' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.128 --rc genhtml_branch_coverage=1 00:07:49.128 --rc genhtml_function_coverage=1 00:07:49.128 --rc genhtml_legend=1 00:07:49.128 --rc geninfo_all_blocks=1 00:07:49.128 --rc geninfo_unexecuted_blocks=1 00:07:49.128 00:07:49.128 ' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.128 --rc genhtml_branch_coverage=1 00:07:49.128 --rc genhtml_function_coverage=1 00:07:49.128 --rc genhtml_legend=1 00:07:49.128 --rc geninfo_all_blocks=1 00:07:49.128 --rc geninfo_unexecuted_blocks=1 00:07:49.128 00:07:49.128 ' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.128 --rc genhtml_branch_coverage=1 00:07:49.128 --rc genhtml_function_coverage=1 00:07:49.128 --rc genhtml_legend=1 00:07:49.128 --rc geninfo_all_blocks=1 00:07:49.128 --rc geninfo_unexecuted_blocks=1 00:07:49.128 00:07:49.128 ' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:49.128 13:54:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.269 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.269 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:57.269 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:57.269 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:57.270 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:57.270 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:57.270 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:57.270 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:57.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:07:57.270 00:07:57.270 --- 10.0.0.2 ping statistics --- 00:07:57.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.270 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:57.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:07:57.270 00:07:57.270 --- 10.0.0.1 ping statistics --- 00:07:57.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.270 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:57.270 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=856867 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 856867 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 856867 ']' 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.271 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.271 [2024-10-30 13:54:54.733078] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:07:57.271 [2024-10-30 13:54:54.733170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.271 [2024-10-30 13:54:54.835963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.271 [2024-10-30 13:54:54.886156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.271 [2024-10-30 13:54:54.886208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.271 [2024-10-30 13:54:54.886217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.271 [2024-10-30 13:54:54.886224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.271 [2024-10-30 13:54:54.886230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.271 [2024-10-30 13:54:54.886984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.271 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.271 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:57.271 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:57.271 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.271 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.531 [2024-10-30 13:54:55.608024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.531 Malloc0 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.531 13:54:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.531 [2024-10-30 13:54:55.669193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=857197 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 857197 /var/tmp/bdevperf.sock 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 857197 ']' 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.531 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.531 [2024-10-30 13:54:55.729513] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:07:57.531 [2024-10-30 13:54:55.729579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid857197 ] 00:07:57.531 [2024-10-30 13:54:55.821222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.791 [2024-10-30 13:54:55.874128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.363 13:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.363 13:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:58.363 13:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:58.363 13:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.363 13:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.363 NVMe0n1 00:07:58.363 13:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.363 13:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:58.624 Running I/O for 10 seconds... 00:08:00.507 9409.00 IOPS, 36.75 MiB/s [2024-10-30T12:54:59.748Z] 10531.00 IOPS, 41.14 MiB/s [2024-10-30T12:55:01.132Z] 10919.33 IOPS, 42.65 MiB/s [2024-10-30T12:55:02.074Z] 11128.25 IOPS, 43.47 MiB/s [2024-10-30T12:55:03.016Z] 11469.80 IOPS, 44.80 MiB/s [2024-10-30T12:55:03.957Z] 11765.00 IOPS, 45.96 MiB/s [2024-10-30T12:55:04.900Z] 11983.00 IOPS, 46.81 MiB/s [2024-10-30T12:55:05.842Z] 12139.75 IOPS, 47.42 MiB/s [2024-10-30T12:55:06.784Z] 12249.67 IOPS, 47.85 MiB/s [2024-10-30T12:55:07.044Z] 12364.70 IOPS, 48.30 MiB/s 00:08:08.745 Latency(us) 00:08:08.745 [2024-10-30T12:55:07.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.745 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:08.745 Verification LBA range: start 0x0 length 0x4000 00:08:08.745 NVMe0n1 : 10.07 12381.78 48.37 0.00 0.00 82381.17 25777.49 78643.20 00:08:08.745 [2024-10-30T12:55:07.044Z] =================================================================================================================== 00:08:08.745 [2024-10-30T12:55:07.044Z] Total : 12381.78 48.37 0.00 0.00 82381.17 25777.49 78643.20 00:08:08.745 { 00:08:08.745 "results": [ 00:08:08.745 { 00:08:08.745 "job": "NVMe0n1", 00:08:08.745 "core_mask": "0x1", 00:08:08.745 "workload": "verify", 00:08:08.745 "status": "finished", 00:08:08.745 "verify_range": { 00:08:08.745 "start": 0, 00:08:08.745 "length": 16384 00:08:08.745 }, 00:08:08.745 "queue_depth": 1024, 00:08:08.745 "io_size": 4096, 00:08:08.745 "runtime": 10.068908, 00:08:08.745 "iops": 12381.779632905575, 00:08:08.745 "mibps": 48.3663266910374, 00:08:08.745 "io_failed": 0, 00:08:08.745 "io_timeout": 0, 00:08:08.745 "avg_latency_us": 82381.17298040443, 00:08:08.745 "min_latency_us": 25777.493333333332, 00:08:08.745 "max_latency_us": 78643.2 00:08:08.745 } 00:08:08.745 ], 00:08:08.745 "core_count": 1 00:08:08.745 } 00:08:08.745 13:55:06 
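For anyone reproducing this queue-depth run outside the CI harness, the sequence traced above condenses to a handful of commands. The following is a condensed sketch derived from the shell trace, under these assumptions: paths are written relative to the SPDK source tree rather than the absolute workspace paths the CI uses, and the trace's rpc_cmd/nvmfappstart helpers are taken to expand to scripts/rpc.py calls and a backgrounded nvmf_tgt as shown.

  # Start the NVMe-oF target inside the test namespace, as the trace does
  # (it accepts RPCs on the default /var/tmp/spdk.sock)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # Configure the TCP transport, a 64 MiB malloc bdev with 512-byte blocks,
  # the subsystem, its namespace, and the TCP listener on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Run bdevperf as the initiator: queue depth 1024, 4 KiB verify I/O, 10 seconds
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The JSON summary above reports the outcome of that run: roughly 12,382 IOPS (48.4 MiB/s) of 4 KiB verify I/O at queue depth 1024 over a 10.07 s runtime, with no failed or timed-out I/O.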
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 857197 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 857197 ']' 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 857197 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 857197 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 857197' 00:08:08.745 killing process with pid 857197 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 857197 00:08:08.745 Received shutdown signal, test time was about 10.000000 seconds 00:08:08.745 00:08:08.745 Latency(us) 00:08:08.745 [2024-10-30T12:55:07.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.745 [2024-10-30T12:55:07.044Z] =================================================================================================================== 00:08:08.745 [2024-10-30T12:55:07.044Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:08.745 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 857197 00:08:08.745 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:08.745 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:08.745 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:08.745 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:08.745 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.745 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:08.745 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.745 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:08.745 rmmod nvme_tcp 00:08:08.745 rmmod nvme_fabrics 00:08:09.006 rmmod nvme_keyring 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 856867 ']' 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 856867 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 856867 ']' 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 856867 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 856867 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 856867' 00:08:09.006 killing process with pid 856867 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 856867 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 856867 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.006 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:11.554 00:08:11.554 real 0m22.375s 00:08:11.554 user 0m25.673s 00:08:11.554 sys 0m7.000s 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:11.554 ************************************ 00:08:11.554 END TEST nvmf_queue_depth 00:08:11.554 ************************************ 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.554 ************************************ 00:08:11.554 START TEST nvmf_target_multipath 00:08:11.554 ************************************ 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:11.554 * Looking for test storage... 00:08:11.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.554 --rc genhtml_branch_coverage=1 00:08:11.554 --rc genhtml_function_coverage=1 00:08:11.554 --rc genhtml_legend=1 00:08:11.554 --rc geninfo_all_blocks=1 00:08:11.554 --rc geninfo_unexecuted_blocks=1 00:08:11.554 00:08:11.554 ' 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.554 --rc genhtml_branch_coverage=1 00:08:11.554 --rc genhtml_function_coverage=1 00:08:11.554 --rc genhtml_legend=1 00:08:11.554 --rc geninfo_all_blocks=1 00:08:11.554 --rc geninfo_unexecuted_blocks=1 00:08:11.554 00:08:11.554 ' 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.554 --rc genhtml_branch_coverage=1 00:08:11.554 --rc genhtml_function_coverage=1 00:08:11.554 --rc genhtml_legend=1 00:08:11.554 --rc geninfo_all_blocks=1 00:08:11.554 --rc geninfo_unexecuted_blocks=1 00:08:11.554 00:08:11.554 ' 00:08:11.554 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.554 --rc genhtml_branch_coverage=1 00:08:11.554 --rc genhtml_function_coverage=1 00:08:11.554 --rc genhtml_legend=1 00:08:11.555 --rc geninfo_all_blocks=1 00:08:11.555 --rc geninfo_unexecuted_blocks=1 00:08:11.555 00:08:11.555 ' 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:11.555 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:19.701 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:19.701 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:19.701 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:19.702 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.702 13:55:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:19.702 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.702 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:08:19.702 00:08:19.702 --- 10.0.0.2 ping statistics --- 00:08:19.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.702 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:19.702 00:08:19.702 --- 10.0.0.1 ping statistics --- 00:08:19.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.702 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:19.702 only one NIC for nvmf test 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.702 rmmod nvme_tcp 00:08:19.702 rmmod nvme_fabrics 00:08:19.702 rmmod nvme_keyring 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.702 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.089 00:08:21.089 real 0m9.949s 00:08:21.089 user 0m2.116s 00:08:21.089 sys 0m5.794s 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.089 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:21.089 ************************************ 00:08:21.089 END TEST nvmf_target_multipath 00:08:21.089 ************************************ 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.351 ************************************ 00:08:21.351 START TEST nvmf_zcopy 00:08:21.351 ************************************ 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:21.351 * Looking for test storage... 
00:08:21.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:21.351 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.612 --rc genhtml_branch_coverage=1 00:08:21.612 --rc genhtml_function_coverage=1 00:08:21.612 --rc genhtml_legend=1 00:08:21.612 --rc geninfo_all_blocks=1 00:08:21.612 --rc geninfo_unexecuted_blocks=1 00:08:21.612 00:08:21.612 ' 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.612 --rc genhtml_branch_coverage=1 00:08:21.612 --rc genhtml_function_coverage=1 00:08:21.612 --rc genhtml_legend=1 00:08:21.612 --rc geninfo_all_blocks=1 00:08:21.612 --rc geninfo_unexecuted_blocks=1 00:08:21.612 00:08:21.612 ' 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.612 --rc genhtml_branch_coverage=1 00:08:21.612 --rc genhtml_function_coverage=1 00:08:21.612 --rc genhtml_legend=1 00:08:21.612 --rc geninfo_all_blocks=1 00:08:21.612 --rc geninfo_unexecuted_blocks=1 00:08:21.612 00:08:21.612 ' 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.612 --rc genhtml_branch_coverage=1 00:08:21.612 --rc genhtml_function_coverage=1 00:08:21.612 --rc genhtml_legend=1 00:08:21.612 --rc geninfo_all_blocks=1 00:08:21.612 --rc geninfo_unexecuted_blocks=1 00:08:21.612 00:08:21.612 ' 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.612 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.613 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:29.757 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:29.757 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:29.757 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:29.757 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.757 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.757 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.757 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:29.757 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.757 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.757 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.757 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:29.757 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:29.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:08:29.757 00:08:29.757 --- 10.0.0.2 ping statistics --- 00:08:29.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.757 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:08:29.757 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:08:29.757 00:08:29.757 --- 10.0.0.1 ping statistics --- 00:08:29.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.758 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=867915 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 867915 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 867915 ']' 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.758 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.758 [2024-10-30 13:55:27.255217] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
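The network bring-up recorded just above gives the zcopy test the same split topology as the earlier tests: the first E810 port (cvl_0_0, 10.0.0.2) is moved into a private namespace for the target, while the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side. A condensed replay of the commands as they appear in the trace (interface names and addresses taken verbatim; run as root):

ip -4 addr flush cvl_0_0                          # clear stale addressing first
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to port 4420 arriving on the initiator-side port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator reachability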
00:08:29.758 [2024-10-30 13:55:27.255288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.758 [2024-10-30 13:55:27.352452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.758 [2024-10-30 13:55:27.401659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.758 [2024-10-30 13:55:27.401708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.758 [2024-10-30 13:55:27.401717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.758 [2024-10-30 13:55:27.401724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.758 [2024-10-30 13:55:27.401730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.758 [2024-10-30 13:55:27.402483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.758 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.758 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:29.758 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:29.758 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.020 [2024-10-30 13:55:28.107740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.020 [2024-10-30 13:55:28.132009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.020 malloc0 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.020 { 00:08:30.020 "params": { 00:08:30.020 "name": "Nvme$subsystem", 00:08:30.020 "trtype": "$TEST_TRANSPORT", 00:08:30.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.020 "adrfam": "ipv4", 00:08:30.020 "trsvcid": "$NVMF_PORT", 00:08:30.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.020 "hdgst": ${hdgst:-false}, 00:08:30.020 "ddgst": ${ddgst:-false} 00:08:30.020 }, 00:08:30.020 "method": "bdev_nvme_attach_controller" 00:08:30.020 } 00:08:30.020 EOF 00:08:30.020 )") 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
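Once nvme-tcp is loaded and nvmf_tgt is running inside the namespace, the trace configures the target over JSON-RPC: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a 32 MiB malloc bdev exported as namespace 1. A minimal stand-alone sketch of that sequence, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket (the harness issues the same calls through its rpc_cmd helper):

RPC=./scripts/rpc.py   # path assumed; any rpc.py reaching the running nvmf_tgt works

# transport options copied verbatim from the trace (zero-copy enabled)
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# subsystem, data listener and discovery listener on the target address
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the trace uses the 'discovery' shorthand for the well-known discovery subsystem
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with a 4096-byte block size, exported as NSID 1
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1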
00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:30.020 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.020 "params": { 00:08:30.020 "name": "Nvme1", 00:08:30.020 "trtype": "tcp", 00:08:30.020 "traddr": "10.0.0.2", 00:08:30.020 "adrfam": "ipv4", 00:08:30.020 "trsvcid": "4420", 00:08:30.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.020 "hdgst": false, 00:08:30.020 "ddgst": false 00:08:30.020 }, 00:08:30.020 "method": "bdev_nvme_attach_controller" 00:08:30.020 }' 00:08:30.020 [2024-10-30 13:55:28.234556] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:08:30.020 [2024-10-30 13:55:28.234619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867996 ] 00:08:30.282 [2024-10-30 13:55:28.325539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.282 [2024-10-30 13:55:28.378468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.543 Running I/O for 10 seconds... 00:08:32.431 6445.00 IOPS, 50.35 MiB/s [2024-10-30T12:55:32.118Z] 7525.00 IOPS, 58.79 MiB/s [2024-10-30T12:55:33.062Z] 8264.67 IOPS, 64.57 MiB/s [2024-10-30T12:55:34.005Z] 8636.00 IOPS, 67.47 MiB/s [2024-10-30T12:55:34.949Z] 8859.60 IOPS, 69.22 MiB/s [2024-10-30T12:55:36.046Z] 9008.50 IOPS, 70.38 MiB/s [2024-10-30T12:55:36.743Z] 9115.29 IOPS, 71.21 MiB/s [2024-10-30T12:55:37.751Z] 9193.12 IOPS, 71.82 MiB/s [2024-10-30T12:55:39.136Z] 9251.67 IOPS, 72.28 MiB/s [2024-10-30T12:55:39.136Z] 9298.30 IOPS, 72.64 MiB/s 00:08:40.837 Latency(us) 00:08:40.837 [2024-10-30T12:55:39.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.837 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:40.837 Verification LBA range: start 0x0 length 0x1000 00:08:40.837 Nvme1n1 : 10.05 9263.13 72.37 0.00 0.00 13724.70 2580.48 43253.76 00:08:40.837 [2024-10-30T12:55:39.136Z] =================================================================================================================== 00:08:40.837 [2024-10-30T12:55:39.136Z] Total : 9263.13 72.37 0.00 0.00 13724.70 2580.48 43253.76 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=870239 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.837 { 00:08:40.837 "params": { 00:08:40.837 "name": 
"Nvme$subsystem", 00:08:40.837 "trtype": "$TEST_TRANSPORT", 00:08:40.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.837 "adrfam": "ipv4", 00:08:40.837 "trsvcid": "$NVMF_PORT", 00:08:40.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.837 "hdgst": ${hdgst:-false}, 00:08:40.837 "ddgst": ${ddgst:-false} 00:08:40.837 }, 00:08:40.837 "method": "bdev_nvme_attach_controller" 00:08:40.837 } 00:08:40.837 EOF 00:08:40.837 )") 00:08:40.837 [2024-10-30 13:55:38.891105] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.891133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:40.837 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.837 "params": { 00:08:40.837 "name": "Nvme1", 00:08:40.837 "trtype": "tcp", 00:08:40.837 "traddr": "10.0.0.2", 00:08:40.837 "adrfam": "ipv4", 00:08:40.837 "trsvcid": "4420", 00:08:40.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.837 "hdgst": false, 00:08:40.837 "ddgst": false 00:08:40.837 }, 00:08:40.837 "method": "bdev_nvme_attach_controller" 00:08:40.837 }' 00:08:40.837 [2024-10-30 13:55:38.903109] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.903118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:38.915138] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.915146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:38.927169] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.927176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:38.936031] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:08:40.837 [2024-10-30 13:55:38.936080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870239 ] 00:08:40.837 [2024-10-30 13:55:38.939199] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.939206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:38.951230] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.951243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:38.963262] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.963270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:38.975292] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.975299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:38.987324] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.987331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:38.999355] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:38.999362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:39.011385] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:39.011393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:39.017916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.837 [2024-10-30 13:55:39.023416] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:39.023424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:39.035448] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:39.035455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:39.047450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.837 [2024-10-30 13:55:39.047480] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:39.047487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:39.059515] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:39.059523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:39.071545] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.837 [2024-10-30 13:55:39.071558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.837 [2024-10-30 13:55:39.083574] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:40.838 [2024-10-30 13:55:39.083584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.838 [2024-10-30 13:55:39.095603] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.838 [2024-10-30 13:55:39.095612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.838 [2024-10-30 13:55:39.107632] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.838 [2024-10-30 13:55:39.107639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.838 [2024-10-30 13:55:39.119673] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.838 [2024-10-30 13:55:39.119689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.838 [2024-10-30 13:55:39.131698] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.838 [2024-10-30 13:55:39.131707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.143730] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.143740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.155762] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.155770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.167791] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.167798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.179820] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.179827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.191852] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.191861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.203881] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.203890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.215913] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.215919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.227943] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.227950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.239975] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.239982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.252017] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.252026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 
13:55:39.264042] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.264051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.276071] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.276077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.288105] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.288113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.300135] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.300141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.312165] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.312172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.324197] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.324204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.336229] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.336237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.348268] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.348283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 Running I/O for 5 seconds... 
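The bdevperf pass launched just above runs for 5 seconds with a 50/50 random read/write mix over NVMe/TCP (-t 5 -q 128 -w randrw -M 50 -o 8192); the repeated "Requested NSID 1 already in use" messages show namespace-add RPCs being re-issued against the live subsystem while the I/O runs. A stand-alone sketch of the equivalent invocation, assuming an SPDK build tree at $SPDK_DIR; the bdev_nvme_attach_controller parameters are copied from the JSON printed in the trace, and the outer "subsystems" wrapper is the standard SPDK JSON-config layout, assumed here for illustration:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 5-second run, queue depth 128, 8 KiB I/Os, 50% reads / 50% writes,
# matching the flags recorded in the trace
"$SPDK_DIR/build/examples/bdevperf" --json /tmp/bdevperf_nvme.json \
    -t 5 -q 128 -w randrw -M 50 -o 8192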
00:08:41.099 [2024-10-30 13:55:39.362896] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.362911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.376247] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.376264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.099 [2024-10-30 13:55:39.390104] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.099 [2024-10-30 13:55:39.390120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.360 [2024-10-30 13:55:39.402808] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.360 [2024-10-30 13:55:39.402827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.360 [2024-10-30 13:55:39.415389] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.360 [2024-10-30 13:55:39.415404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.360 [2024-10-30 13:55:39.428612] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.360 [2024-10-30 13:55:39.428627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.360 [2024-10-30 13:55:39.442451] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.360 [2024-10-30 13:55:39.442466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.360 [2024-10-30 13:55:39.455661] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.455676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.468147] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.468161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.481412] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.481427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.494573] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.494588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.507838] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.507852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.521304] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.521319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.533944] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.533959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.547061] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 
[2024-10-30 13:55:39.547076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.560259] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.560273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.573742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.573761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.587119] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.587133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.600548] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.600562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.613914] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.613929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.626608] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.626623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.639307] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.639321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.361 [2024-10-30 13:55:39.651910] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.361 [2024-10-30 13:55:39.651933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.664493] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.664509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.677448] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.677463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.690714] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.690729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.704141] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.704156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.717237] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.717252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.730421] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.730436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.743622] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.743637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.756636] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.756651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.769938] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.769953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.783329] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.783344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.796984] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.796999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.809689] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.809704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.822463] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.822477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.835980] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.835994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.849600] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.849614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.862334] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.862348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.875650] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.875665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.889351] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.889366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.902355] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.902373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.622 [2024-10-30 13:55:39.915185] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.622 [2024-10-30 13:55:39.915200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:39.927983] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:39.927998] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:39.940965] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:39.940979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:39.954494] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:39.954509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:39.967130] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:39.967145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:39.979750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:39.979764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:39.991957] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:39.991972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.005650] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.005665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.017996] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.018012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.030255] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.030270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.043477] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.043492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.056492] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.056509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.069592] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.069608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.083013] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.083029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.096161] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.096177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.109536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.109552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.122646] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.122661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.135819] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.135834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.148951] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.148966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.161981] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.161995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.883 [2024-10-30 13:55:40.175186] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.883 [2024-10-30 13:55:40.175200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.145 [2024-10-30 13:55:40.188053] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.188068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.201251] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.201265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.214148] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.214162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.226870] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.226884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.240502] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.240517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.253140] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.253154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.266216] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.266231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.279474] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.279490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.293011] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.293026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.305740] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.305760] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.319178] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.319193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.332634] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.332649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.345289] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.345304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 19036.00 IOPS, 148.72 MiB/s [2024-10-30T12:55:40.445Z] [2024-10-30 13:55:40.358712] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.358727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.371993] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.372009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.385411] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.385426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.398558] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.398573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.411968] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.411982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.424618] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.424633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.146 [2024-10-30 13:55:40.437074] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.146 [2024-10-30 13:55:40.437089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.450187] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.450203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.463215] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.463229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.476099] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.476114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.489602] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.489616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 
13:55:40.502350] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.502365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.515693] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.515708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.528440] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.528456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.541957] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.541971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.555067] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.555082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.567173] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.567188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.580326] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.580342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.593639] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.593654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.606457] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.606471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.619364] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.619380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.632296] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.632316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.645602] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.645617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.658237] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.658252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.671496] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.671511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.684815] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.684831] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.408 [2024-10-30 13:55:40.697782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.408 [2024-10-30 13:55:40.697797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.710742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.710762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.724101] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.724116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.737002] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.737017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.749834] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.749849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.763046] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.763060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.775609] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.775623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.788899] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.788913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.801491] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.801505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.814566] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.814580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.827475] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.827490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.841148] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.841163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.853371] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.853386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.866551] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.866565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.880108] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.880126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.893010] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.893025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.906704] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.906718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.920373] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.920388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.932981] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.932995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.945970] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.945985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.670 [2024-10-30 13:55:40.958887] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.670 [2024-10-30 13:55:40.958901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:40.971397] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:40.971411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:40.984716] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:40.984731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:40.998266] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:40.998280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.011958] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.011972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.025168] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.025183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.037573] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.037588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.050642] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.050657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.064227] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.064241] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.077029] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.077043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.089286] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.089300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.102470] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.102485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.115631] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.115646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.129268] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.129286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.142305] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.142320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.155578] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.155593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.168742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.168761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.182010] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.182024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.195332] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.195346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.208190] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.208204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.931 [2024-10-30 13:55:41.220767] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.931 [2024-10-30 13:55:41.220781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.192 [2024-10-30 13:55:41.234124] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.192 [2024-10-30 13:55:41.234139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.192 [2024-10-30 13:55:41.247687] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.192 [2024-10-30 13:55:41.247701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.192 [2024-10-30 13:55:41.260511] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.192 [2024-10-30 13:55:41.260526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.192 [2024-10-30 13:55:41.274000] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.192 [2024-10-30 13:55:41.274015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.192 [2024-10-30 13:55:41.286893] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.192 [2024-10-30 13:55:41.286907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.192 [2024-10-30 13:55:41.299806] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.192 [2024-10-30 13:55:41.299820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.192 [2024-10-30 13:55:41.312425] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.192 [2024-10-30 13:55:41.312439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.192 [2024-10-30 13:55:41.324978] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.192 [2024-10-30 13:55:41.324993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.192 [2024-10-30 13:55:41.337487] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.337502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.350442] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.350456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 19126.50 IOPS, 149.43 MiB/s [2024-10-30T12:55:41.492Z] [2024-10-30 13:55:41.364151] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.364166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.376837] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.376851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.390086] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.390100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.403310] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.403324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.415876] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.415890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.429187] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.429201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.442515] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:43.193 [2024-10-30 13:55:41.442529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.455885] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.455899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.468616] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.468631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.193 [2024-10-30 13:55:41.481504] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.193 [2024-10-30 13:55:41.481518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.494892] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.494907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.507202] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.507216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.519996] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.520010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.533020] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.533034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.546127] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.546142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.559484] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.559499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.573057] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.573071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.586606] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.586621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.599593] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.599608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.613266] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.613281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.626306] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.626321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.639847] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.639862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.653327] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.653342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.665878] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.665892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.679199] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.679213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.691744] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.691762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.704336] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.704351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.717495] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.717510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.730404] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.730419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.453 [2024-10-30 13:55:41.743263] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.453 [2024-10-30 13:55:41.743279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.755989] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.756005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.768696] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.768712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.781506] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.781521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.794328] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.794343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.807695] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.807710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.820891] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.820906] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.834181] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.834197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.846578] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.846592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.859509] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.859524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.873284] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.873299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.886352] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.886366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.900113] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.900128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.913719] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.913734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.926984] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.926999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.939575] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.939591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.953170] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.953185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.965616] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.965631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.978701] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.978716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:41.992168] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:41.992183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.714 [2024-10-30 13:55:42.005597] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.714 [2024-10-30 13:55:42.005612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.019190] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.019206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.032333] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.032347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.045671] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.045686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.059263] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.059278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.072739] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.072758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.085847] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.085862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.098737] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.098757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.111536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.111551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.124275] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.124290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.137347] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.137362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.149858] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.976 [2024-10-30 13:55:42.149873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.976 [2024-10-30 13:55:42.163451] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.977 [2024-10-30 13:55:42.163466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.977 [2024-10-30 13:55:42.176298] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.977 [2024-10-30 13:55:42.176313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.977 [2024-10-30 13:55:42.190057] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.977 [2024-10-30 13:55:42.190073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.977 [2024-10-30 13:55:42.203526] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.977 [2024-10-30 13:55:42.203542] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.977 [2024-10-30 13:55:42.217046] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.977 [2024-10-30 13:55:42.217062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.977 [2024-10-30 13:55:42.229833] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.977 [2024-10-30 13:55:42.229848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.977 [2024-10-30 13:55:42.242333] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.977 [2024-10-30 13:55:42.242347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.977 [2024-10-30 13:55:42.255908] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.977 [2024-10-30 13:55:42.255924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.977 [2024-10-30 13:55:42.269118] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.977 [2024-10-30 13:55:42.269134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.238 [2024-10-30 13:55:42.282767] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.238 [2024-10-30 13:55:42.282783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.238 [2024-10-30 13:55:42.296175] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.238 [2024-10-30 13:55:42.296190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.238 [2024-10-30 13:55:42.309523] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.238 [2024-10-30 13:55:42.309538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.238 [2024-10-30 13:55:42.322530] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.238 [2024-10-30 13:55:42.322545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.238 [2024-10-30 13:55:42.335777] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.238 [2024-10-30 13:55:42.335792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.349009] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.349024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 19151.33 IOPS, 149.62 MiB/s [2024-10-30T12:55:42.538Z] [2024-10-30 13:55:42.362227] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.362246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.375060] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.375075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.388264] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.388280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 
13:55:42.401744] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.401763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.415306] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.415321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.427873] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.427887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.440504] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.440518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.453847] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.453862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.467391] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.467405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.480617] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.480632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.493682] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.493697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.506705] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.506720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.519543] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.519557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.239 [2024-10-30 13:55:42.532719] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.239 [2024-10-30 13:55:42.532733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.545541] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.545555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.558570] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.558585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.571299] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.571314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.584086] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.584100] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.597656] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.597671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.611365] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.611383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.624829] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.624844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.637177] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.637191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.649750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.649764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.663163] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.663177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.676285] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.676299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.689219] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.689233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.701605] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.701620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.715090] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.715105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.728564] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.728579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.741792] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.741808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.755215] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.755230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.768038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.768053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.780757] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.780771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.500 [2024-10-30 13:55:42.793839] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.500 [2024-10-30 13:55:42.793853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.806973] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.806989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.820542] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.820556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.833580] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.833594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.847129] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.847143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.859626] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.859644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.872864] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.872879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.886565] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.886580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.899676] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.899691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.913049] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.913064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.926154] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.926168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.938380] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.938394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.950743] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.950764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.963081] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.963095] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.975682] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.975698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:42.989221] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:42.989236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:43.002380] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:43.002394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:43.015177] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:43.015191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:43.028419] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:43.028434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:43.042274] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:43.042288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.761 [2024-10-30 13:55:43.054469] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.761 [2024-10-30 13:55:43.054483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.022 [2024-10-30 13:55:43.068621] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.022 [2024-10-30 13:55:43.068636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.022 [2024-10-30 13:55:43.081654] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.022 [2024-10-30 13:55:43.081669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.022 [2024-10-30 13:55:43.094812] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.022 [2024-10-30 13:55:43.094826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.022 [2024-10-30 13:55:43.107866] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.107881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.120997] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.121011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.134153] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.134167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.147290] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.147304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.160601] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.160615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.173561] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.173575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.186833] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.186848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.200317] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.200331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.213945] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.213960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.226322] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.226336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.240040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.240055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.252737] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.252756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.265139] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.265154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.277823] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.277837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.290513] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.290527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.303892] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.303906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.023 [2024-10-30 13:55:43.316632] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.023 [2024-10-30 13:55:43.316647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.329468] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.329482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.342811] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.342826] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.356061] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.356075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 19177.25 IOPS, 149.82 MiB/s [2024-10-30T12:55:43.583Z] [2024-10-30 13:55:43.368737] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.368755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.381188] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.381203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.394262] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.394278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.407847] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.407861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.420502] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.420518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.433016] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.433031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.446299] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.446315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.459017] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.459032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.472852] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.472867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.485904] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.485919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.499330] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.499345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.512822] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.512837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.525853] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.525868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 
13:55:43.539099] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.539114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.552427] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.552442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.566204] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.566219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.284 [2024-10-30 13:55:43.579559] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.284 [2024-10-30 13:55:43.579575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.592720] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.592735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.605785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.605799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.618569] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.618584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.631303] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.631317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.644950] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.644964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.658503] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.658518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.671926] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.671941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.685215] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.685230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.698598] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.698613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.711129] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.711144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.724316] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.724332] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.737707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.737722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.751170] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.751184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.545 [2024-10-30 13:55:43.764651] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.545 [2024-10-30 13:55:43.764667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.546 [2024-10-30 13:55:43.777987] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.546 [2024-10-30 13:55:43.778002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.546 [2024-10-30 13:55:43.791411] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.546 [2024-10-30 13:55:43.791426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.546 [2024-10-30 13:55:43.804569] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.546 [2024-10-30 13:55:43.804584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.546 [2024-10-30 13:55:43.817278] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.546 [2024-10-30 13:55:43.817292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.546 [2024-10-30 13:55:43.830890] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.546 [2024-10-30 13:55:43.830905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.546 [2024-10-30 13:55:43.844583] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.546 [2024-10-30 13:55:43.844606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.857201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.857216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.869753] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.869768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.882546] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.882561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.895841] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.895857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.908842] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.908857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.922147] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.922162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.935247] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.935262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.948629] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.948644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.962203] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.962218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.975152] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.975167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:43.987659] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:43.987674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:44.000108] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:44.000122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:44.013420] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:44.013435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:44.026887] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:44.026902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:44.039773] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:44.039788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:44.052854] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.807 [2024-10-30 13:55:44.052868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.807 [2024-10-30 13:55:44.065797] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.808 [2024-10-30 13:55:44.065811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.808 [2024-10-30 13:55:44.079099] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.808 [2024-10-30 13:55:44.079114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.808 [2024-10-30 13:55:44.092505] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.808 [2024-10-30 13:55:44.092523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.808 [2024-10-30 13:55:44.105820] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.808 [2024-10-30 13:55:44.105835] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.118765] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.118779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.132146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.132160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.144955] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.144970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.158055] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.158069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.171148] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.171163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.183866] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.183880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.196486] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.196501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.209354] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.209369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.222704] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.222719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.235374] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.235389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.248707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.248722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.261528] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.261543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.274573] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.274588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.287955] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.287969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.301404] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.301419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.314161] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.314176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.327357] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.327371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.340033] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.340051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 [2024-10-30 13:55:44.353207] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.353221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.068 19186.40 IOPS, 149.89 MiB/s [2024-10-30T12:55:44.367Z] [2024-10-30 13:55:44.365894] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.068 [2024-10-30 13:55:44.365909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 00:08:46.328 Latency(us) 00:08:46.328 [2024-10-30T12:55:44.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.328 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:46.328 Nvme1n1 : 5.01 19190.56 149.93 0.00 0.00 6664.34 2853.55 15619.41 00:08:46.328 [2024-10-30T12:55:44.627Z] =================================================================================================================== 00:08:46.328 [2024-10-30T12:55:44.627Z] Total : 19190.56 149.93 0.00 0.00 6664.34 2853.55 15619.41 00:08:46.328 [2024-10-30 13:55:44.375385] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.328 [2024-10-30 13:55:44.375398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 [2024-10-30 13:55:44.387399] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.328 [2024-10-30 13:55:44.387411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 [2024-10-30 13:55:44.399433] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.328 [2024-10-30 13:55:44.399447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 [2024-10-30 13:55:44.411463] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.328 [2024-10-30 13:55:44.411477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 [2024-10-30 13:55:44.423489] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.328 [2024-10-30 13:55:44.423499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 [2024-10-30 13:55:44.435519] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.328 [2024-10-30 13:55:44.435528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 [2024-10-30 
13:55:44.447549] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.328 [2024-10-30 13:55:44.447558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 [2024-10-30 13:55:44.459580] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.328 [2024-10-30 13:55:44.459590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 [2024-10-30 13:55:44.471608] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.328 [2024-10-30 13:55:44.471616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (870239) - No such process 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 870239 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.328 delay0 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.328 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:46.588 [2024-10-30 13:55:44.684877] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:54.724 Initializing NVMe Controllers 00:08:54.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:54.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:54.724 Initialization complete. Launching workers. 
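Editor's note on the trace above: the long run of paired "Requested NSID 1 already in use" / "Unable to add namespace" messages is the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns for an NSID that is still claimed, so every attempt is rejected while the timed I/O job keeps running (the interleaved IOPS lines and the Nvme1n1 latency table). After that loop, the trace removes NSID 1, layers a delay bdev (delay0) on top of malloc0, re-adds it as NSID 1, and launches the abort example. As a rough sketch only, not the exact test code (rpc_cmd in the test framework wraps scripts/rpc.py, and the default /var/tmp/spdk.sock RPC socket is assumed), the equivalent standalone RPC calls would be:
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # values taken verbatim from the trace above; the delay bdev interprets them as
    # average/p99 read and write latencies in microseconds (about 1 s of injected delay)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1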
00:08:54.724 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 34659 00:08:54.724 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 34781, failed to submit 118 00:08:54.724 success 34684, unsuccessful 97, failed 0 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.724 rmmod nvme_tcp 00:08:54.724 rmmod nvme_fabrics 00:08:54.724 rmmod nvme_keyring 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 867915 ']' 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 867915 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 867915 ']' 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 867915 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 867915 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 867915' 00:08:54.724 killing process with pid 867915 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 867915 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 867915 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.724 13:55:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.724 13:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.116 00:08:56.116 real 0m34.598s 00:08:56.116 user 0m45.625s 00:08:56.116 sys 0m11.875s 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.116 ************************************ 00:08:56.116 END TEST nvmf_zcopy 00:08:56.116 ************************************ 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.116 ************************************ 00:08:56.116 START TEST nvmf_nmic 00:08:56.116 ************************************ 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:56.116 * Looking for test storage... 
00:08:56.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.116 --rc genhtml_branch_coverage=1 00:08:56.116 --rc genhtml_function_coverage=1 00:08:56.116 --rc genhtml_legend=1 00:08:56.116 --rc geninfo_all_blocks=1 00:08:56.116 --rc geninfo_unexecuted_blocks=1 00:08:56.116 00:08:56.116 ' 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.116 --rc genhtml_branch_coverage=1 00:08:56.116 --rc genhtml_function_coverage=1 00:08:56.116 --rc genhtml_legend=1 00:08:56.116 --rc geninfo_all_blocks=1 00:08:56.116 --rc geninfo_unexecuted_blocks=1 00:08:56.116 00:08:56.116 ' 00:08:56.116 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.116 --rc genhtml_branch_coverage=1 00:08:56.116 --rc genhtml_function_coverage=1 00:08:56.116 --rc genhtml_legend=1 00:08:56.117 --rc geninfo_all_blocks=1 00:08:56.117 --rc geninfo_unexecuted_blocks=1 00:08:56.117 00:08:56.117 ' 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.117 --rc genhtml_branch_coverage=1 00:08:56.117 --rc genhtml_function_coverage=1 00:08:56.117 --rc genhtml_legend=1 00:08:56.117 --rc geninfo_all_blocks=1 00:08:56.117 --rc geninfo_unexecuted_blocks=1 00:08:56.117 00:08:56.117 ' 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
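Editor's note on the trace above: this is scripts/common.sh deciding whether the installed lcov is older than 2 (lt 1.15 2 -> cmp_versions 1.15 '<' 2); each version string is split on '.', '-' and ':' and the numeric fields are compared one by one. A minimal standalone sketch of the same idea, not the project's actual helper, might look like:
    ver_lt() {                    # succeed when $1 sorts before $2, comparing numeric fields
        local IFS=.-: a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                  # equal is not "less than"
    }
    ver_lt 1.15 2 && echo 'lcov is older than 2'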
00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:56.117 
13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.117 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:04.260 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:04.260 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.260 13:56:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.260 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:04.261 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:04.261 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:04.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:09:04.261 00:09:04.261 --- 10.0.0.2 ping statistics --- 00:09:04.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.261 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:09:04.261 00:09:04.261 --- 10.0.0.1 ping statistics --- 00:09:04.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.261 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=876992 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 876992 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 876992 ']' 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.261 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.261 [2024-10-30 13:56:01.921732] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
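The nvmf_tcp_init sequence traced above boils down to moving one port of the E810 pair into a private network namespace and addressing both ends. A minimal sketch, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing used in this run:

  # Target side lives inside the namespace, initiator side stays in the host namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator / host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port; the SPDK_NVMF comment tag lets teardown strip the rule
  # later via iptables-save | grep -v SPDK_NVMF | iptables-restore.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Sanity-check both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so only traffic arriving over the cvl_0_0/cvl_0_1 link can reach the listener.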
00:09:04.261 [2024-10-30 13:56:01.921817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.261 [2024-10-30 13:56:02.020166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.261 [2024-10-30 13:56:02.073716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.261 [2024-10-30 13:56:02.073779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.261 [2024-10-30 13:56:02.073792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.261 [2024-10-30 13:56:02.073802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.261 [2024-10-30 13:56:02.073810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.261 [2024-10-30 13:56:02.076191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.261 [2024-10-30 13:56:02.076355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.261 [2024-10-30 13:56:02.076514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.261 [2024-10-30 13:56:02.076515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.522 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.522 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:04.522 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:04.522 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:04.522 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.523 [2024-10-30 13:56:02.778411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.523 Malloc0 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.523 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:04.783 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.783 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.783 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.783 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.783 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.783 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 [2024-10-30 13:56:02.852839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:04.784 test case1: single bdev can't be used in multiple subsystems 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 [2024-10-30 13:56:02.888725] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:04.784 [2024-10-30 13:56:02.888768] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:04.784 [2024-10-30 13:56:02.888781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.784 request: 00:09:04.784 { 00:09:04.784 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:04.784 "namespace": { 00:09:04.784 "bdev_name": "Malloc0", 00:09:04.784 "no_auto_visible": false 
00:09:04.784 }, 00:09:04.784 "method": "nvmf_subsystem_add_ns", 00:09:04.784 "req_id": 1 00:09:04.784 } 00:09:04.784 Got JSON-RPC error response 00:09:04.784 response: 00:09:04.784 { 00:09:04.784 "code": -32602, 00:09:04.784 "message": "Invalid parameters" 00:09:04.784 } 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:04.784 Adding namespace failed - expected result. 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:04.784 test case2: host connect to nvmf target in multiple paths 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 [2024-10-30 13:56:02.900939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.784 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:06.700 13:56:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:08.083 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:08.083 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:08.083 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.083 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:08.083 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:10.027 13:56:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:10.027 13:56:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:10.027 13:56:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.027 13:56:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:10.027 13:56:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.027 13:56:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:10.027 13:56:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:10.027 [global] 00:09:10.027 thread=1 00:09:10.027 invalidate=1 00:09:10.027 rw=write 00:09:10.027 time_based=1 00:09:10.027 runtime=1 00:09:10.027 ioengine=libaio 00:09:10.027 direct=1 00:09:10.027 bs=4096 00:09:10.027 iodepth=1 00:09:10.027 norandommap=0 00:09:10.027 numjobs=1 00:09:10.027 00:09:10.027 verify_dump=1 00:09:10.027 verify_backlog=512 00:09:10.027 verify_state_save=0 00:09:10.027 do_verify=1 00:09:10.027 verify=crc32c-intel 00:09:10.027 [job0] 00:09:10.027 filename=/dev/nvme0n1 00:09:10.027 Could not set queue depth (nvme0n1) 00:09:10.287 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.287 fio-3.35 00:09:10.287 Starting 1 thread 00:09:11.670 00:09:11.670 job0: (groupid=0, jobs=1): err= 0: pid=878536: Wed Oct 30 13:56:09 2024 00:09:11.670 read: IOPS=18, BW=73.6KiB/s (75.3kB/s)(76.0KiB/1033msec) 00:09:11.670 slat (nsec): min=7690, max=27111, avg=23800.32, stdev=5469.56 00:09:11.670 clat (usec): min=953, max=42003, avg=39686.59, stdev=9384.37 00:09:11.670 lat (usec): min=962, max=42028, avg=39710.39, stdev=9387.94 00:09:11.670 clat percentiles (usec): 00:09:11.670 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[41157], 20.00th=[41681], 00:09:11.670 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:11.670 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:11.670 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:11.670 | 99.99th=[42206] 00:09:11.670 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:09:11.670 slat (usec): min=9, max=29861, avg=85.08, stdev=1318.55 00:09:11.670 clat (usec): min=165, max=749, avg=451.81, stdev=97.89 00:09:11.670 lat (usec): min=177, max=30610, avg=536.89, stdev=1335.63 00:09:11.670 clat percentiles (usec): 00:09:11.670 | 1.00th=[ 231], 5.00th=[ 306], 10.00th=[ 338], 20.00th=[ 355], 00:09:11.670 | 30.00th=[ 400], 40.00th=[ 453], 50.00th=[ 469], 60.00th=[ 478], 00:09:11.670 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 562], 95.00th=[ 644], 00:09:11.670 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 750], 99.95th=[ 750], 00:09:11.670 | 99.99th=[ 750] 00:09:11.671 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:11.671 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:11.671 lat (usec) : 250=2.45%, 500=72.69%, 750=21.28%, 1000=0.19% 00:09:11.671 lat (msec) : 50=3.39% 00:09:11.671 cpu : usr=0.87%, sys=1.07%, ctx=534, majf=0, minf=1 00:09:11.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.671 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.671 00:09:11.671 Run status group 0 (all jobs): 00:09:11.671 READ: bw=73.6KiB/s (75.3kB/s), 73.6KiB/s-73.6KiB/s (75.3kB/s-75.3kB/s), io=76.0KiB (77.8kB), run=1033-1033msec 00:09:11.671 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:09:11.671 00:09:11.671 Disk stats (read/write): 00:09:11.671 nvme0n1: ios=40/512, merge=0/0, ticks=1549/228, in_queue=1777, util=98.80% 00:09:11.671 13:56:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.671 rmmod nvme_tcp 00:09:11.671 rmmod nvme_fabrics 00:09:11.671 rmmod nvme_keyring 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 876992 ']' 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 876992 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 876992 ']' 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 876992 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 876992 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 876992' 00:09:11.671 killing process with pid 876992 00:09:11.671 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 876992 00:09:11.671 13:56:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 876992 00:09:11.930 13:56:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.930 13:56:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.839 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:13.839 00:09:13.839 real 0m17.943s 00:09:13.839 user 0m48.823s 00:09:13.839 sys 0m6.497s 00:09:13.839 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.839 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.839 ************************************ 00:09:13.839 END TEST nvmf_nmic 00:09:13.839 ************************************ 00:09:13.839 13:56:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:13.839 13:56:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:13.839 13:56:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.839 13:56:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.101 ************************************ 00:09:14.101 START TEST nvmf_fio_target 00:09:14.101 ************************************ 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:14.101 * Looking for test storage... 
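For reference, the nvmf_nmic flow that just completed reduces to a short sequence of JSON-RPC calls plus two host connects. A rough equivalent using scripts/rpc.py against the default /var/tmp/spdk.sock (paths shortened; the test issues the same calls through the rpc_cmd wrapper) is:

  RPC="./scripts/rpc.py"   # assumes the SPDK repo root as working directory
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Case 1: the same bdev cannot back a namespace in a second subsystem; this add_ns
  # is expected to fail with "Invalid parameters" because Malloc0 is already claimed.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'
  # Case 2: one subsystem may expose multiple listeners, giving the host two paths.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  # (the test additionally passes --hostnqn/--hostid values generated per run)

After the single-job fio verify pass, the host disconnects from cnode1 and the target, kernel modules, iptables rule and namespace are torn down, which is the cleanup traced above.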
00:09:14.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:14.101 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:14.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.102 --rc genhtml_branch_coverage=1 00:09:14.102 --rc genhtml_function_coverage=1 00:09:14.102 --rc genhtml_legend=1 00:09:14.102 --rc geninfo_all_blocks=1 00:09:14.102 --rc geninfo_unexecuted_blocks=1 00:09:14.102 00:09:14.102 ' 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:14.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.102 --rc genhtml_branch_coverage=1 00:09:14.102 --rc genhtml_function_coverage=1 00:09:14.102 --rc genhtml_legend=1 00:09:14.102 --rc geninfo_all_blocks=1 00:09:14.102 --rc geninfo_unexecuted_blocks=1 00:09:14.102 00:09:14.102 ' 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:14.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.102 --rc genhtml_branch_coverage=1 00:09:14.102 --rc genhtml_function_coverage=1 00:09:14.102 --rc genhtml_legend=1 00:09:14.102 --rc geninfo_all_blocks=1 00:09:14.102 --rc geninfo_unexecuted_blocks=1 00:09:14.102 00:09:14.102 ' 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:14.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.102 --rc genhtml_branch_coverage=1 00:09:14.102 --rc genhtml_function_coverage=1 00:09:14.102 --rc genhtml_legend=1 00:09:14.102 --rc geninfo_all_blocks=1 00:09:14.102 --rc geninfo_unexecuted_blocks=1 00:09:14.102 00:09:14.102 ' 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.102 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.103 13:56:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.103 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.364 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.364 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.364 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.364 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.508 13:56:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:22.508 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:22.508 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.508 13:56:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:22.508 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:22.508 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.508 13:56:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.508 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:09:22.509 00:09:22.509 --- 10.0.0.2 ping statistics --- 00:09:22.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.509 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:09:22.509 00:09:22.509 --- 10.0.0.1 ping statistics --- 00:09:22.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.509 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=882910 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 882910 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 882910 ']' 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.509 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.509 [2024-10-30 13:56:19.934651] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
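The nvmfappstart/waitforlisten step seen here starts nvmf_tgt inside the namespace and then blocks until the app answers on its UNIX-domain RPC socket. The real helper lives in autotest_common.sh; a minimal illustration of the idea, assuming the default /var/tmp/spdk.sock and repo-relative paths, is:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target is ready to accept commands (or give up).
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done

Only once this loop returns does the test proceed to create the TCP transport and the malloc/raid0/concat bdevs that back the subsystem's namespaces.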
00:09:22.509 [2024-10-30 13:56:19.934723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.509 [2024-10-30 13:56:20.035195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.509 [2024-10-30 13:56:20.093281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.509 [2024-10-30 13:56:20.093344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.509 [2024-10-30 13:56:20.093357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.509 [2024-10-30 13:56:20.093366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.509 [2024-10-30 13:56:20.093375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.509 [2024-10-30 13:56:20.095496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.509 [2024-10-30 13:56:20.095642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.509 [2024-10-30 13:56:20.095804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.509 [2024-10-30 13:56:20.095803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.509 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.509 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:22.509 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.509 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.509 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.770 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.770 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:22.770 [2024-10-30 13:56:20.975361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.770 13:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.031 13:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:23.031 13:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.292 13:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:23.292 13:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.553 13:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:23.553 13:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.814 13:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:23.814 13:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:23.814 13:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.074 13:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:24.074 13:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.334 13:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:24.334 13:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.593 13:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:24.593 13:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:24.593 13:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:24.854 13:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:24.854 13:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.113 13:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:25.114 13:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:25.114 13:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.373 [2024-10-30 13:56:23.546677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.373 13:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:25.633 13:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:25.633 13:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.545 13:56:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:27.545 13:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:27.545 13:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.545 13:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:27.545 13:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:27.545 13:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:29.457 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:29.457 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:29.457 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.457 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:29.457 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.457 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:29.457 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:29.457 [global] 00:09:29.457 thread=1 00:09:29.457 invalidate=1 00:09:29.457 rw=write 00:09:29.457 time_based=1 00:09:29.457 runtime=1 00:09:29.457 ioengine=libaio 00:09:29.457 direct=1 00:09:29.457 bs=4096 00:09:29.457 iodepth=1 00:09:29.457 norandommap=0 00:09:29.457 numjobs=1 00:09:29.457 00:09:29.457 verify_dump=1 00:09:29.457 verify_backlog=512 00:09:29.457 verify_state_save=0 00:09:29.457 do_verify=1 00:09:29.457 verify=crc32c-intel 00:09:29.457 [job0] 00:09:29.457 filename=/dev/nvme0n1 00:09:29.457 [job1] 00:09:29.457 filename=/dev/nvme0n2 00:09:29.457 [job2] 00:09:29.457 filename=/dev/nvme0n3 00:09:29.457 [job3] 00:09:29.457 filename=/dev/nvme0n4 00:09:29.457 Could not set queue depth (nvme0n1) 00:09:29.457 Could not set queue depth (nvme0n2) 00:09:29.457 Could not set queue depth (nvme0n3) 00:09:29.457 Could not set queue depth (nvme0n4) 00:09:29.718 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.718 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.718 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.718 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.718 fio-3.35 00:09:29.718 Starting 4 threads 00:09:31.102 00:09:31.102 job0: (groupid=0, jobs=1): err= 0: pid=884802: Wed Oct 30 13:56:29 2024 00:09:31.102 read: IOPS=853, BW=3413KiB/s (3494kB/s)(3416KiB/1001msec) 00:09:31.102 slat (nsec): min=3672, max=48457, avg=10252.63, stdev=7854.90 00:09:31.102 clat (usec): min=312, max=883, avg=681.68, stdev=70.91 00:09:31.102 lat (usec): min=318, max=909, avg=691.93, stdev=74.86 00:09:31.102 clat percentiles (usec): 00:09:31.102 | 1.00th=[ 449], 5.00th=[ 562], 10.00th=[ 603], 20.00th=[ 644], 
00:09:31.102 | 30.00th=[ 660], 40.00th=[ 668], 50.00th=[ 685], 60.00th=[ 693], 00:09:31.102 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 799], 00:09:31.102 | 99.00th=[ 832], 99.50th=[ 840], 99.90th=[ 881], 99.95th=[ 881], 00:09:31.102 | 99.99th=[ 881] 00:09:31.102 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:31.102 slat (nsec): min=5614, max=70498, avg=18384.04, stdev=13409.80 00:09:31.102 clat (usec): min=193, max=645, avg=374.18, stdev=83.65 00:09:31.102 lat (usec): min=201, max=680, avg=392.56, stdev=93.90 00:09:31.102 clat percentiles (usec): 00:09:31.102 | 1.00th=[ 243], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:09:31.102 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 383], 00:09:31.102 | 70.00th=[ 429], 80.00th=[ 461], 90.00th=[ 494], 95.00th=[ 519], 00:09:31.102 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 627], 99.95th=[ 644], 00:09:31.102 | 99.99th=[ 644] 00:09:31.102 bw ( KiB/s): min= 4096, max= 4096, per=31.64%, avg=4096.00, stdev= 0.00, samples=1 00:09:31.102 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:31.102 lat (usec) : 250=0.75%, 500=49.68%, 750=43.02%, 1000=6.55% 00:09:31.102 cpu : usr=1.00%, sys=3.20%, ctx=1880, majf=0, minf=1 00:09:31.102 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.103 issued rwts: total=854,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.103 job1: (groupid=0, jobs=1): err= 0: pid=884803: Wed Oct 30 13:56:29 2024 00:09:31.103 read: IOPS=657, BW=2629KiB/s (2692kB/s)(2632KiB/1001msec) 00:09:31.103 slat (nsec): min=7102, max=61249, avg=23236.53, stdev=8571.60 00:09:31.103 clat (usec): min=557, max=1175, avg=780.66, stdev=67.72 00:09:31.103 lat (usec): min=584, max=1201, avg=803.89, stdev=69.44 00:09:31.103 clat percentiles (usec): 00:09:31.103 | 1.00th=[ 635], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 725], 00:09:31.103 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 783], 60.00th=[ 799], 00:09:31.103 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 881], 00:09:31.103 | 99.00th=[ 971], 99.50th=[ 1020], 99.90th=[ 1172], 99.95th=[ 1172], 00:09:31.103 | 99.99th=[ 1172] 00:09:31.103 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:31.103 slat (nsec): min=9803, max=65960, avg=28810.45, stdev=11070.40 00:09:31.103 clat (usec): min=112, max=622, avg=419.65, stdev=79.52 00:09:31.103 lat (usec): min=123, max=656, avg=448.46, stdev=85.38 00:09:31.103 clat percentiles (usec): 00:09:31.103 | 1.00th=[ 239], 5.00th=[ 289], 10.00th=[ 314], 20.00th=[ 338], 00:09:31.103 | 30.00th=[ 371], 40.00th=[ 412], 50.00th=[ 437], 60.00th=[ 453], 00:09:31.103 | 70.00th=[ 465], 80.00th=[ 490], 90.00th=[ 515], 95.00th=[ 537], 00:09:31.103 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 619], 99.95th=[ 619], 00:09:31.103 | 99.99th=[ 619] 00:09:31.103 bw ( KiB/s): min= 4096, max= 4096, per=31.64%, avg=4096.00, stdev= 0.00, samples=1 00:09:31.103 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:31.103 lat (usec) : 250=0.77%, 500=50.77%, 750=20.39%, 1000=27.82% 00:09:31.103 lat (msec) : 2=0.24% 00:09:31.103 cpu : usr=2.00%, sys=4.90%, ctx=1683, majf=0, minf=1 00:09:31.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.103 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.103 issued rwts: total=658,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.103 job2: (groupid=0, jobs=1): err= 0: pid=884804: Wed Oct 30 13:56:29 2024 00:09:31.103 read: IOPS=490, BW=1960KiB/s (2008kB/s)(1984KiB/1012msec) 00:09:31.103 slat (nsec): min=7211, max=46219, avg=26277.10, stdev=4559.85 00:09:31.103 clat (usec): min=536, max=41953, avg=1445.77, stdev=4435.26 00:09:31.103 lat (usec): min=562, max=41979, avg=1472.04, stdev=4434.95 00:09:31.103 clat percentiles (usec): 00:09:31.103 | 1.00th=[ 562], 5.00th=[ 676], 10.00th=[ 725], 20.00th=[ 799], 00:09:31.103 | 30.00th=[ 832], 40.00th=[ 906], 50.00th=[ 979], 60.00th=[ 1057], 00:09:31.103 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1221], 00:09:31.103 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:31.103 | 99.99th=[42206] 00:09:31.103 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:09:31.103 slat (nsec): min=10080, max=51390, avg=29093.49, stdev=10745.51 00:09:31.103 clat (usec): min=108, max=1017, avg=503.53, stdev=164.22 00:09:31.103 lat (usec): min=121, max=1052, avg=532.62, stdev=165.98 00:09:31.103 clat percentiles (usec): 00:09:31.103 | 1.00th=[ 221], 5.00th=[ 285], 10.00th=[ 314], 20.00th=[ 367], 00:09:31.103 | 30.00th=[ 416], 40.00th=[ 449], 50.00th=[ 474], 60.00th=[ 510], 00:09:31.103 | 70.00th=[ 553], 80.00th=[ 635], 90.00th=[ 758], 95.00th=[ 840], 00:09:31.103 | 99.00th=[ 922], 99.50th=[ 979], 99.90th=[ 1020], 99.95th=[ 1020], 00:09:31.103 | 99.99th=[ 1020] 00:09:31.103 bw ( KiB/s): min= 4096, max= 4096, per=31.64%, avg=4096.00, stdev= 0.00, samples=1 00:09:31.103 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:31.103 lat (usec) : 250=0.79%, 500=28.67%, 750=21.83%, 1000=25.50% 00:09:31.103 lat (msec) : 2=22.62%, 50=0.60% 00:09:31.103 cpu : usr=0.99%, sys=3.26%, ctx=1009, majf=0, minf=1 00:09:31.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.103 issued rwts: total=496,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.103 job3: (groupid=0, jobs=1): err= 0: pid=884805: Wed Oct 30 13:56:29 2024 00:09:31.103 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:31.103 slat (nsec): min=7933, max=60748, avg=26598.71, stdev=3682.25 00:09:31.103 clat (usec): min=591, max=41215, avg=1044.95, stdev=1783.05 00:09:31.103 lat (usec): min=618, max=41242, avg=1071.54, stdev=1783.06 00:09:31.103 clat percentiles (usec): 00:09:31.103 | 1.00th=[ 660], 5.00th=[ 742], 10.00th=[ 799], 20.00th=[ 873], 00:09:31.103 | 30.00th=[ 906], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 1004], 00:09:31.103 | 70.00th=[ 1037], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156], 00:09:31.103 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[41157], 99.95th=[41157], 00:09:31.103 | 99.99th=[41157] 00:09:31.103 write: IOPS=714, BW=2857KiB/s (2926kB/s)(2860KiB/1001msec); 0 zone resets 00:09:31.103 slat (nsec): min=10375, max=62642, avg=31674.73, stdev=8895.81 00:09:31.103 clat (usec): min=274, max=972, avg=585.79, stdev=118.47 00:09:31.103 lat (usec): min=286, 
max=1007, avg=617.46, stdev=121.86 00:09:31.103 clat percentiles (usec): 00:09:31.103 | 1.00th=[ 334], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 478], 00:09:31.103 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 611], 00:09:31.103 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 775], 00:09:31.103 | 99.00th=[ 865], 99.50th=[ 955], 99.90th=[ 971], 99.95th=[ 971], 00:09:31.103 | 99.99th=[ 971] 00:09:31.103 bw ( KiB/s): min= 4096, max= 4096, per=31.64%, avg=4096.00, stdev= 0.00, samples=1 00:09:31.103 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:31.103 lat (usec) : 500=15.48%, 750=40.59%, 1000=26.49% 00:09:31.103 lat (msec) : 2=17.36%, 50=0.08% 00:09:31.103 cpu : usr=1.90%, sys=3.70%, ctx=1228, majf=0, minf=1 00:09:31.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.103 issued rwts: total=512,715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.103 00:09:31.103 Run status group 0 (all jobs): 00:09:31.103 READ: bw=9960KiB/s (10.2MB/s), 1960KiB/s-3413KiB/s (2008kB/s-3494kB/s), io=9.84MiB (10.3MB), run=1001-1012msec 00:09:31.103 WRITE: bw=12.6MiB/s (13.3MB/s), 2024KiB/s-4092KiB/s (2072kB/s-4190kB/s), io=12.8MiB (13.4MB), run=1001-1012msec 00:09:31.103 00:09:31.103 Disk stats (read/write): 00:09:31.103 nvme0n1: ios=661/1024, merge=0/0, ticks=618/378, in_queue=996, util=84.67% 00:09:31.103 nvme0n2: ios=561/913, merge=0/0, ticks=1061/378, in_queue=1439, util=88.05% 00:09:31.103 nvme0n3: ios=458/512, merge=0/0, ticks=1412/255, in_queue=1667, util=92.07% 00:09:31.103 nvme0n4: ios=523/512, merge=0/0, ticks=1114/304, in_queue=1418, util=94.22% 00:09:31.103 13:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:31.103 [global] 00:09:31.103 thread=1 00:09:31.103 invalidate=1 00:09:31.103 rw=randwrite 00:09:31.103 time_based=1 00:09:31.103 runtime=1 00:09:31.103 ioengine=libaio 00:09:31.103 direct=1 00:09:31.103 bs=4096 00:09:31.103 iodepth=1 00:09:31.103 norandommap=0 00:09:31.103 numjobs=1 00:09:31.103 00:09:31.103 verify_dump=1 00:09:31.103 verify_backlog=512 00:09:31.103 verify_state_save=0 00:09:31.103 do_verify=1 00:09:31.103 verify=crc32c-intel 00:09:31.103 [job0] 00:09:31.103 filename=/dev/nvme0n1 00:09:31.103 [job1] 00:09:31.103 filename=/dev/nvme0n2 00:09:31.103 [job2] 00:09:31.103 filename=/dev/nvme0n3 00:09:31.103 [job3] 00:09:31.103 filename=/dev/nvme0n4 00:09:31.103 Could not set queue depth (nvme0n1) 00:09:31.103 Could not set queue depth (nvme0n2) 00:09:31.103 Could not set queue depth (nvme0n3) 00:09:31.103 Could not set queue depth (nvme0n4) 00:09:31.364 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.364 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.364 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.364 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.364 fio-3.35 00:09:31.364 Starting 4 threads 00:09:32.746 00:09:32.746 job0: (groupid=0, jobs=1): err= 0: 
pid=885330: Wed Oct 30 13:56:30 2024 00:09:32.746 read: IOPS=105, BW=424KiB/s (434kB/s)(424KiB/1001msec) 00:09:32.746 slat (nsec): min=25361, max=45426, avg=26596.94, stdev=2543.05 00:09:32.746 clat (usec): min=684, max=42012, avg=6792.76, stdev=14123.07 00:09:32.746 lat (usec): min=711, max=42038, avg=6819.36, stdev=14123.01 00:09:32.746 clat percentiles (usec): 00:09:32.746 | 1.00th=[ 848], 5.00th=[ 914], 10.00th=[ 955], 20.00th=[ 1004], 00:09:32.746 | 30.00th=[ 1029], 40.00th=[ 1074], 50.00th=[ 1123], 60.00th=[ 1156], 00:09:32.746 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[41157], 95.00th=[41681], 00:09:32.746 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:32.746 | 99.99th=[42206] 00:09:32.746 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:32.746 slat (nsec): min=8328, max=66296, avg=28622.05, stdev=9585.68 00:09:32.746 clat (usec): min=116, max=1025, avg=504.83, stdev=174.08 00:09:32.746 lat (usec): min=125, max=1045, avg=533.45, stdev=178.04 00:09:32.746 clat percentiles (usec): 00:09:32.746 | 1.00th=[ 125], 5.00th=[ 221], 10.00th=[ 285], 20.00th=[ 343], 00:09:32.746 | 30.00th=[ 412], 40.00th=[ 465], 50.00th=[ 510], 60.00th=[ 553], 00:09:32.746 | 70.00th=[ 594], 80.00th=[ 652], 90.00th=[ 734], 95.00th=[ 807], 00:09:32.746 | 99.00th=[ 873], 99.50th=[ 947], 99.90th=[ 1029], 99.95th=[ 1029], 00:09:32.746 | 99.99th=[ 1029] 00:09:32.746 bw ( KiB/s): min= 4096, max= 4096, per=40.19%, avg=4096.00, stdev= 0.00, samples=1 00:09:32.746 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:32.746 lat (usec) : 250=5.02%, 500=34.47%, 750=36.08%, 1000=10.36% 00:09:32.746 lat (msec) : 2=11.65%, 50=2.43% 00:09:32.746 cpu : usr=1.20%, sys=2.30%, ctx=619, majf=0, minf=1 00:09:32.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.746 issued rwts: total=106,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.746 job1: (groupid=0, jobs=1): err= 0: pid=885331: Wed Oct 30 13:56:30 2024 00:09:32.746 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:32.746 slat (nsec): min=7918, max=44540, avg=27777.96, stdev=1874.52 00:09:32.746 clat (usec): min=560, max=1164, avg=966.19, stdev=74.40 00:09:32.746 lat (usec): min=589, max=1191, avg=993.97, stdev=74.49 00:09:32.746 clat percentiles (usec): 00:09:32.746 | 1.00th=[ 693], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 922], 00:09:32.746 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:09:32.746 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:09:32.746 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:09:32.746 | 99.99th=[ 1172] 00:09:32.746 write: IOPS=804, BW=3217KiB/s (3294kB/s)(3220KiB/1001msec); 0 zone resets 00:09:32.746 slat (nsec): min=9303, max=96693, avg=28911.48, stdev=11543.52 00:09:32.746 clat (usec): min=238, max=1171, avg=568.60, stdev=129.70 00:09:32.747 lat (usec): min=263, max=1180, avg=597.51, stdev=135.32 00:09:32.747 clat percentiles (usec): 00:09:32.747 | 1.00th=[ 269], 5.00th=[ 318], 10.00th=[ 379], 20.00th=[ 461], 00:09:32.747 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:09:32.747 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 758], 00:09:32.747 | 99.00th=[ 824], 99.50th=[ 832], 
99.90th=[ 1172], 99.95th=[ 1172], 00:09:32.747 | 99.99th=[ 1172] 00:09:32.747 bw ( KiB/s): min= 4096, max= 4096, per=40.19%, avg=4096.00, stdev= 0.00, samples=1 00:09:32.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:32.747 lat (usec) : 250=0.08%, 500=17.16%, 750=40.77%, 1000=29.38% 00:09:32.747 lat (msec) : 2=12.60% 00:09:32.747 cpu : usr=2.50%, sys=5.00%, ctx=1321, majf=0, minf=1 00:09:32.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.747 issued rwts: total=512,805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.747 job2: (groupid=0, jobs=1): err= 0: pid=885332: Wed Oct 30 13:56:30 2024 00:09:32.747 read: IOPS=136, BW=544KiB/s (557kB/s)(560KiB/1029msec) 00:09:32.747 slat (nsec): min=6746, max=57142, avg=23438.24, stdev=6759.06 00:09:32.747 clat (usec): min=719, max=41901, avg=5597.15, stdev=12805.39 00:09:32.747 lat (usec): min=744, max=41928, avg=5620.58, stdev=12806.50 00:09:32.747 clat percentiles (usec): 00:09:32.747 | 1.00th=[ 725], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 914], 00:09:32.747 | 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1074], 00:09:32.747 | 70.00th=[ 1106], 80.00th=[ 1156], 90.00th=[41157], 95.00th=[41157], 00:09:32.747 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:32.747 | 99.99th=[41681] 00:09:32.747 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:32.747 slat (nsec): min=9542, max=62964, avg=28313.19, stdev=9779.58 00:09:32.747 clat (usec): min=171, max=647, avg=434.79, stdev=85.39 00:09:32.747 lat (usec): min=181, max=680, avg=463.10, stdev=89.48 00:09:32.747 clat percentiles (usec): 00:09:32.747 | 1.00th=[ 237], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 355], 00:09:32.747 | 30.00th=[ 388], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[ 469], 00:09:32.747 | 70.00th=[ 486], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 553], 00:09:32.747 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 652], 99.95th=[ 652], 00:09:32.747 | 99.99th=[ 652] 00:09:32.747 bw ( KiB/s): min= 4096, max= 4096, per=40.19%, avg=4096.00, stdev= 0.00, samples=1 00:09:32.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:32.747 lat (usec) : 250=1.23%, 500=58.44%, 750=19.17%, 1000=8.59% 00:09:32.747 lat (msec) : 2=10.12%, 50=2.45% 00:09:32.747 cpu : usr=1.07%, sys=1.56%, ctx=652, majf=0, minf=1 00:09:32.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.747 issued rwts: total=140,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.747 job3: (groupid=0, jobs=1): err= 0: pid=885333: Wed Oct 30 13:56:30 2024 00:09:32.747 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:32.747 slat (nsec): min=6401, max=54856, avg=24980.88, stdev=6310.30 00:09:32.747 clat (usec): min=324, max=41732, avg=1050.32, stdev=3107.78 00:09:32.747 lat (usec): min=350, max=41758, avg=1075.30, stdev=3107.97 00:09:32.747 clat percentiles (usec): 00:09:32.747 | 1.00th=[ 420], 5.00th=[ 486], 10.00th=[ 537], 20.00th=[ 570], 00:09:32.747 | 30.00th=[ 660], 
40.00th=[ 758], 50.00th=[ 807], 60.00th=[ 873], 00:09:32.747 | 70.00th=[ 979], 80.00th=[ 1037], 90.00th=[ 1106], 95.00th=[ 1156], 00:09:32.747 | 99.00th=[ 1303], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:09:32.747 | 99.99th=[41681] 00:09:32.747 write: IOPS=792, BW=3169KiB/s (3245kB/s)(3172KiB/1001msec); 0 zone resets 00:09:32.747 slat (nsec): min=9808, max=53986, avg=28800.69, stdev=9730.24 00:09:32.747 clat (usec): min=232, max=904, avg=525.79, stdev=143.97 00:09:32.747 lat (usec): min=243, max=938, avg=554.59, stdev=147.54 00:09:32.747 clat percentiles (usec): 00:09:32.747 | 1.00th=[ 251], 5.00th=[ 297], 10.00th=[ 334], 20.00th=[ 383], 00:09:32.747 | 30.00th=[ 441], 40.00th=[ 478], 50.00th=[ 529], 60.00th=[ 570], 00:09:32.747 | 70.00th=[ 611], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 758], 00:09:32.747 | 99.00th=[ 848], 99.50th=[ 857], 99.90th=[ 906], 99.95th=[ 906], 00:09:32.747 | 99.99th=[ 906] 00:09:32.747 bw ( KiB/s): min= 4096, max= 4096, per=40.19%, avg=4096.00, stdev= 0.00, samples=1 00:09:32.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:32.747 lat (usec) : 250=0.46%, 500=28.97%, 750=43.07%, 1000=17.01% 00:09:32.747 lat (msec) : 2=10.27%, 50=0.23% 00:09:32.747 cpu : usr=1.60%, sys=3.90%, ctx=1306, majf=0, minf=1 00:09:32.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.747 issued rwts: total=512,793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.747 00:09:32.747 Run status group 0 (all jobs): 00:09:32.747 READ: bw=4937KiB/s (5055kB/s), 424KiB/s-2046KiB/s (434kB/s-2095kB/s), io=5080KiB (5202kB), run=1001-1029msec 00:09:32.747 WRITE: bw=9.95MiB/s (10.4MB/s), 1990KiB/s-3217KiB/s (2038kB/s-3294kB/s), io=10.2MiB (10.7MB), run=1001-1029msec 00:09:32.747 00:09:32.747 Disk stats (read/write): 00:09:32.747 nvme0n1: ios=113/512, merge=0/0, ticks=602/197, in_queue=799, util=86.97% 00:09:32.747 nvme0n2: ios=561/542, merge=0/0, ticks=959/241, in_queue=1200, util=88.28% 00:09:32.747 nvme0n3: ios=192/512, merge=0/0, ticks=663/217, in_queue=880, util=95.26% 00:09:32.747 nvme0n4: ios=569/525, merge=0/0, ticks=1188/255, in_queue=1443, util=94.13% 00:09:32.747 13:56:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:32.747 [global] 00:09:32.747 thread=1 00:09:32.747 invalidate=1 00:09:32.747 rw=write 00:09:32.747 time_based=1 00:09:32.747 runtime=1 00:09:32.747 ioengine=libaio 00:09:32.747 direct=1 00:09:32.747 bs=4096 00:09:32.747 iodepth=128 00:09:32.747 norandommap=0 00:09:32.747 numjobs=1 00:09:32.747 00:09:32.747 verify_dump=1 00:09:32.747 verify_backlog=512 00:09:32.747 verify_state_save=0 00:09:32.747 do_verify=1 00:09:32.747 verify=crc32c-intel 00:09:32.747 [job0] 00:09:32.747 filename=/dev/nvme0n1 00:09:32.747 [job1] 00:09:32.747 filename=/dev/nvme0n2 00:09:32.747 [job2] 00:09:32.747 filename=/dev/nvme0n3 00:09:32.747 [job3] 00:09:32.747 filename=/dev/nvme0n4 00:09:32.747 Could not set queue depth (nvme0n1) 00:09:32.747 Could not set queue depth (nvme0n2) 00:09:32.747 Could not set queue depth (nvme0n3) 00:09:32.747 Could not set queue depth (nvme0n4) 00:09:33.008 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:33.008 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.008 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.008 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.008 fio-3.35 00:09:33.008 Starting 4 threads 00:09:34.390 00:09:34.390 job0: (groupid=0, jobs=1): err= 0: pid=885872: Wed Oct 30 13:56:32 2024 00:09:34.390 read: IOPS=7748, BW=30.3MiB/s (31.7MB/s)(30.5MiB/1007msec) 00:09:34.390 slat (nsec): min=948, max=10724k, avg=56233.83, stdev=451442.94 00:09:34.390 clat (usec): min=1495, max=29117, avg=8115.17, stdev=2972.75 00:09:34.391 lat (usec): min=1505, max=29125, avg=8171.41, stdev=3006.08 00:09:34.391 clat percentiles (usec): 00:09:34.391 | 1.00th=[ 2409], 5.00th=[ 4555], 10.00th=[ 5538], 20.00th=[ 5932], 00:09:34.391 | 30.00th=[ 6587], 40.00th=[ 7111], 50.00th=[ 7570], 60.00th=[ 7832], 00:09:34.391 | 70.00th=[ 8291], 80.00th=[10421], 90.00th=[11863], 95.00th=[14091], 00:09:34.391 | 99.00th=[17957], 99.50th=[19792], 99.90th=[24773], 99.95th=[29230], 00:09:34.391 | 99.99th=[29230] 00:09:34.391 write: IOPS=8135, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1007msec); 0 zone resets 00:09:34.391 slat (nsec): min=1627, max=8934.4k, avg=55359.76, stdev=395576.00 00:09:34.391 clat (usec): min=546, max=29168, avg=7863.02, stdev=4604.22 00:09:34.391 lat (usec): min=555, max=29171, avg=7918.38, stdev=4636.41 00:09:34.391 clat percentiles (usec): 00:09:34.391 | 1.00th=[ 1303], 5.00th=[ 3425], 10.00th=[ 3949], 20.00th=[ 4883], 00:09:34.391 | 30.00th=[ 5538], 40.00th=[ 5997], 50.00th=[ 6718], 60.00th=[ 7439], 00:09:34.391 | 70.00th=[ 7963], 80.00th=[10290], 90.00th=[12387], 95.00th=[19006], 00:09:34.391 | 99.00th=[28443], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:09:34.391 | 99.99th=[29230] 00:09:34.391 bw ( KiB/s): min=28640, max=36864, per=35.69%, avg=32752.00, stdev=5815.25, samples=2 00:09:34.391 iops : min= 7160, max= 9216, avg=8188.00, stdev=1453.81, samples=2 00:09:34.391 lat (usec) : 750=0.04%, 1000=0.07% 00:09:34.391 lat (msec) : 2=1.12%, 4=6.28%, 10=70.63%, 20=19.92%, 50=1.94% 00:09:34.391 cpu : usr=5.77%, sys=10.04%, ctx=514, majf=0, minf=2 00:09:34.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:34.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.391 issued rwts: total=7803,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.391 job1: (groupid=0, jobs=1): err= 0: pid=885873: Wed Oct 30 13:56:32 2024 00:09:34.391 read: IOPS=7379, BW=28.8MiB/s (30.2MB/s)(29.0MiB/1006msec) 00:09:34.391 slat (nsec): min=942, max=18313k, avg=59864.21, stdev=505442.66 00:09:34.391 clat (usec): min=1422, max=54161, avg=8750.42, stdev=5408.16 00:09:34.391 lat (usec): min=1429, max=54169, avg=8810.29, stdev=5446.50 00:09:34.391 clat percentiles (usec): 00:09:34.391 | 1.00th=[ 2868], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 5735], 00:09:34.391 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 8029], 00:09:34.391 | 70.00th=[ 8848], 80.00th=[10028], 90.00th=[12911], 95.00th=[19268], 00:09:34.391 | 99.00th=[34341], 99.50th=[42206], 99.90th=[50070], 99.95th=[54264], 00:09:34.391 | 99.99th=[54264] 00:09:34.391 write: IOPS=7634, BW=29.8MiB/s 
(31.3MB/s)(30.0MiB/1006msec); 0 zone resets 00:09:34.391 slat (nsec): min=1639, max=17646k, avg=58046.86, stdev=511777.44 00:09:34.391 clat (usec): min=1486, max=54133, avg=7958.09, stdev=6923.06 00:09:34.391 lat (usec): min=1494, max=54136, avg=8016.14, stdev=6961.16 00:09:34.391 clat percentiles (usec): 00:09:34.391 | 1.00th=[ 2573], 5.00th=[ 3425], 10.00th=[ 3720], 20.00th=[ 4424], 00:09:34.391 | 30.00th=[ 5211], 40.00th=[ 5735], 50.00th=[ 6521], 60.00th=[ 6718], 00:09:34.391 | 70.00th=[ 7177], 80.00th=[ 7701], 90.00th=[12518], 95.00th=[20579], 00:09:34.391 | 99.00th=[44827], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:09:34.391 | 99.99th=[54264] 00:09:34.391 bw ( KiB/s): min=30248, max=31192, per=33.47%, avg=30720.00, stdev=667.51, samples=2 00:09:34.391 iops : min= 7562, max= 7798, avg=7680.00, stdev=166.88, samples=2 00:09:34.391 lat (msec) : 2=0.42%, 4=8.80%, 10=72.11%, 20=13.95%, 50=4.41% 00:09:34.391 lat (msec) : 100=0.30% 00:09:34.391 cpu : usr=5.17%, sys=7.96%, ctx=454, majf=0, minf=1 00:09:34.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:34.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.391 issued rwts: total=7424,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.391 job2: (groupid=0, jobs=1): err= 0: pid=885874: Wed Oct 30 13:56:32 2024 00:09:34.391 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:09:34.391 slat (nsec): min=953, max=14323k, avg=131577.40, stdev=799537.74 00:09:34.391 clat (usec): min=4804, max=62145, avg=19677.56, stdev=10766.71 00:09:34.391 lat (usec): min=4813, max=62150, avg=19809.14, stdev=10809.33 00:09:34.391 clat percentiles (usec): 00:09:34.391 | 1.00th=[ 6259], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[11469], 00:09:34.391 | 30.00th=[11994], 40.00th=[14353], 50.00th=[17171], 60.00th=[19268], 00:09:34.391 | 70.00th=[22414], 80.00th=[27657], 90.00th=[30540], 95.00th=[47973], 00:09:34.391 | 99.00th=[56886], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:09:34.391 | 99.99th=[62129] 00:09:34.391 write: IOPS=3118, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1006msec); 0 zone resets 00:09:34.391 slat (nsec): min=1833, max=16382k, avg=165191.13, stdev=1006583.46 00:09:34.391 clat (usec): min=1249, max=66429, avg=21414.85, stdev=14590.93 00:09:34.391 lat (usec): min=1260, max=66440, avg=21580.04, stdev=14668.99 00:09:34.391 clat percentiles (usec): 00:09:34.391 | 1.00th=[ 2278], 5.00th=[ 6456], 10.00th=[ 7504], 20.00th=[ 8291], 00:09:34.391 | 30.00th=[10028], 40.00th=[12387], 50.00th=[15926], 60.00th=[22152], 00:09:34.391 | 70.00th=[30802], 80.00th=[33817], 90.00th=[41681], 95.00th=[51119], 00:09:34.391 | 99.00th=[59507], 99.50th=[62653], 99.90th=[66323], 99.95th=[66323], 00:09:34.391 | 99.99th=[66323] 00:09:34.391 bw ( KiB/s): min= 8944, max=15632, per=13.39%, avg=12288.00, stdev=4729.13, samples=2 00:09:34.391 iops : min= 2236, max= 3908, avg=3072.00, stdev=1182.28, samples=2 00:09:34.391 lat (msec) : 2=0.24%, 4=0.95%, 10=19.28%, 20=40.12%, 50=34.16% 00:09:34.391 lat (msec) : 100=5.25% 00:09:34.391 cpu : usr=2.19%, sys=4.08%, ctx=276, majf=0, minf=2 00:09:34.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:34.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:09:34.391 issued rwts: total=3072,3137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.391 job3: (groupid=0, jobs=1): err= 0: pid=885875: Wed Oct 30 13:56:32 2024 00:09:34.391 read: IOPS=3809, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1007msec) 00:09:34.391 slat (nsec): min=969, max=25746k, avg=148057.01, stdev=1083332.96 00:09:34.391 clat (usec): min=5518, max=59619, avg=18496.49, stdev=10583.16 00:09:34.391 lat (usec): min=7139, max=59647, avg=18644.54, stdev=10682.75 00:09:34.391 clat percentiles (usec): 00:09:34.391 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10159], 00:09:34.391 | 30.00th=[10814], 40.00th=[11863], 50.00th=[12518], 60.00th=[16909], 00:09:34.391 | 70.00th=[21890], 80.00th=[28443], 90.00th=[35390], 95.00th=[40109], 00:09:34.391 | 99.00th=[45351], 99.50th=[45351], 99.90th=[51119], 99.95th=[58459], 00:09:34.391 | 99.99th=[59507] 00:09:34.391 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:09:34.391 slat (nsec): min=1753, max=9872.2k, avg=100007.72, stdev=582396.78 00:09:34.391 clat (usec): min=4522, max=43576, avg=13580.70, stdev=6576.91 00:09:34.391 lat (usec): min=4533, max=44292, avg=13680.70, stdev=6622.43 00:09:34.391 clat percentiles (usec): 00:09:34.391 | 1.00th=[ 6980], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8029], 00:09:34.391 | 30.00th=[ 8455], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[12911], 00:09:34.391 | 70.00th=[16909], 80.00th=[19268], 90.00th=[21890], 95.00th=[28443], 00:09:34.391 | 99.00th=[35390], 99.50th=[38011], 99.90th=[39584], 99.95th=[40109], 00:09:34.391 | 99.99th=[43779] 00:09:34.391 bw ( KiB/s): min=16384, max=16384, per=17.85%, avg=16384.00, stdev= 0.00, samples=2 00:09:34.391 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:34.391 lat (msec) : 10=30.90%, 20=44.49%, 50=24.56%, 100=0.05% 00:09:34.391 cpu : usr=2.88%, sys=4.87%, ctx=306, majf=0, minf=1 00:09:34.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:34.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.391 issued rwts: total=3836,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.391 00:09:34.391 Run status group 0 (all jobs): 00:09:34.391 READ: bw=85.9MiB/s (90.0MB/s), 11.9MiB/s-30.3MiB/s (12.5MB/s-31.7MB/s), io=86.5MiB (90.7MB), run=1006-1007msec 00:09:34.391 WRITE: bw=89.6MiB/s (94.0MB/s), 12.2MiB/s-31.8MiB/s (12.8MB/s-33.3MB/s), io=90.3MiB (94.6MB), run=1006-1007msec 00:09:34.391 00:09:34.391 Disk stats (read/write): 00:09:34.391 nvme0n1: ios=7205/7175, merge=0/0, ticks=46673/39895, in_queue=86568, util=85.37% 00:09:34.391 nvme0n2: ios=5747/6144, merge=0/0, ticks=47352/42495, in_queue=89847, util=90.83% 00:09:34.391 nvme0n3: ios=2621/2912, merge=0/0, ticks=29602/45017, in_queue=74619, util=92.72% 00:09:34.391 nvme0n4: ios=3513/3584, merge=0/0, ticks=31023/19720, in_queue=50743, util=96.58% 00:09:34.391 13:56:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:34.391 [global] 00:09:34.391 thread=1 00:09:34.391 invalidate=1 00:09:34.391 rw=randwrite 00:09:34.391 time_based=1 00:09:34.391 runtime=1 00:09:34.391 ioengine=libaio 00:09:34.391 direct=1 00:09:34.391 bs=4096 00:09:34.391 iodepth=128 
00:09:34.391 norandommap=0 00:09:34.391 numjobs=1 00:09:34.391 00:09:34.391 verify_dump=1 00:09:34.391 verify_backlog=512 00:09:34.391 verify_state_save=0 00:09:34.391 do_verify=1 00:09:34.391 verify=crc32c-intel 00:09:34.391 [job0] 00:09:34.391 filename=/dev/nvme0n1 00:09:34.391 [job1] 00:09:34.391 filename=/dev/nvme0n2 00:09:34.391 [job2] 00:09:34.391 filename=/dev/nvme0n3 00:09:34.391 [job3] 00:09:34.391 filename=/dev/nvme0n4 00:09:34.391 Could not set queue depth (nvme0n1) 00:09:34.391 Could not set queue depth (nvme0n2) 00:09:34.391 Could not set queue depth (nvme0n3) 00:09:34.391 Could not set queue depth (nvme0n4) 00:09:34.651 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.651 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.651 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.651 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.651 fio-3.35 00:09:34.651 Starting 4 threads 00:09:36.036 00:09:36.036 job0: (groupid=0, jobs=1): err= 0: pid=886393: Wed Oct 30 13:56:34 2024 00:09:36.036 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:09:36.036 slat (nsec): min=876, max=13202k, avg=68114.23, stdev=456247.58 00:09:36.036 clat (usec): min=3404, max=27022, avg=8703.84, stdev=3145.09 00:09:36.036 lat (usec): min=3406, max=30286, avg=8771.95, stdev=3178.97 00:09:36.036 clat percentiles (usec): 00:09:36.036 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7111], 00:09:36.036 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 8029], 00:09:36.036 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[12125], 95.00th=[15664], 00:09:36.036 | 99.00th=[22676], 99.50th=[25297], 99.90th=[26346], 99.95th=[27132], 00:09:36.036 | 99.99th=[27132] 00:09:36.036 write: IOPS=7697, BW=30.1MiB/s (31.5MB/s)(30.2MiB/1003msec); 0 zone resets 00:09:36.036 slat (nsec): min=1481, max=5194.3k, avg=57770.45, stdev=327008.12 00:09:36.036 clat (usec): min=2658, max=16212, avg=7793.73, stdev=1777.58 00:09:36.036 lat (usec): min=2669, max=16942, avg=7851.50, stdev=1792.49 00:09:36.036 clat percentiles (usec): 00:09:36.036 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 6259], 20.00th=[ 6783], 00:09:36.036 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7570], 00:09:36.036 | 70.00th=[ 8094], 80.00th=[ 8979], 90.00th=[10421], 95.00th=[11076], 00:09:36.036 | 99.00th=[12780], 99.50th=[14484], 99.90th=[16188], 99.95th=[16188], 00:09:36.036 | 99.99th=[16188] 00:09:36.036 bw ( KiB/s): min=28672, max=32768, per=30.04%, avg=30720.00, stdev=2896.31, samples=2 00:09:36.036 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:09:36.036 lat (msec) : 4=0.47%, 10=84.62%, 20=14.09%, 50=0.82% 00:09:36.036 cpu : usr=4.49%, sys=5.39%, ctx=797, majf=0, minf=1 00:09:36.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:36.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.036 issued rwts: total=7680,7721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.037 job1: (groupid=0, jobs=1): err= 0: pid=886394: Wed Oct 30 13:56:34 2024 00:09:36.037 read: IOPS=7391, BW=28.9MiB/s (30.3MB/s)(29.0MiB/1003msec) 
00:09:36.037 slat (nsec): min=904, max=7167.9k, avg=66140.25, stdev=456820.04 00:09:36.037 clat (usec): min=1933, max=21397, avg=8681.02, stdev=2193.33 00:09:36.037 lat (usec): min=2617, max=21399, avg=8747.16, stdev=2226.56 00:09:36.037 clat percentiles (usec): 00:09:36.037 | 1.00th=[ 4621], 5.00th=[ 6194], 10.00th=[ 6915], 20.00th=[ 7308], 00:09:36.037 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8717], 00:09:36.037 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[11076], 95.00th=[12649], 00:09:36.037 | 99.00th=[17433], 99.50th=[18220], 99.90th=[20579], 99.95th=[21365], 00:09:36.037 | 99.99th=[21365] 00:09:36.037 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:09:36.037 slat (nsec): min=1492, max=6804.2k, avg=59353.02, stdev=381850.98 00:09:36.037 clat (usec): min=2588, max=21396, avg=8157.58, stdev=2431.62 00:09:36.037 lat (usec): min=2595, max=21398, avg=8216.93, stdev=2453.91 00:09:36.037 clat percentiles (usec): 00:09:36.037 | 1.00th=[ 3949], 5.00th=[ 4621], 10.00th=[ 5342], 20.00th=[ 6456], 00:09:36.037 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8160], 00:09:36.037 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[12256], 95.00th=[13829], 00:09:36.037 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15008], 99.95th=[15008], 00:09:36.037 | 99.99th=[21365] 00:09:36.037 bw ( KiB/s): min=30512, max=30928, per=30.04%, avg=30720.00, stdev=294.16, samples=2 00:09:36.037 iops : min= 7628, max= 7732, avg=7680.00, stdev=73.54, samples=2 00:09:36.037 lat (msec) : 2=0.01%, 4=1.02%, 10=82.31%, 20=16.56%, 50=0.10% 00:09:36.037 cpu : usr=6.29%, sys=6.89%, ctx=540, majf=0, minf=1 00:09:36.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:36.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.037 issued rwts: total=7414,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.037 job2: (groupid=0, jobs=1): err= 0: pid=886396: Wed Oct 30 13:56:34 2024 00:09:36.037 read: IOPS=5739, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1003msec) 00:09:36.037 slat (nsec): min=951, max=9417.1k, avg=79043.76, stdev=451695.94 00:09:36.037 clat (usec): min=1422, max=49721, avg=10326.70, stdev=4478.50 00:09:36.037 lat (usec): min=4879, max=49724, avg=10405.75, stdev=4497.40 00:09:36.037 clat percentiles (usec): 00:09:36.037 | 1.00th=[ 6194], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 8979], 00:09:36.037 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:09:36.037 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11863], 95.00th=[12911], 00:09:36.037 | 99.00th=[45876], 99.50th=[45876], 99.90th=[49546], 99.95th=[49546], 00:09:36.037 | 99.99th=[49546] 00:09:36.037 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:09:36.037 slat (nsec): min=1591, max=38587k, avg=85044.35, stdev=762799.37 00:09:36.037 clat (usec): min=1202, max=46983, avg=11007.63, stdev=6366.35 00:09:36.037 lat (usec): min=1212, max=49132, avg=11092.68, stdev=6411.21 00:09:36.037 clat percentiles (usec): 00:09:36.037 | 1.00th=[ 6456], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[ 8094], 00:09:36.037 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:09:36.037 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[16450], 95.00th=[29492], 00:09:36.037 | 99.00th=[36963], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:09:36.037 | 
99.99th=[46924] 00:09:36.037 bw ( KiB/s): min=20480, max=28656, per=24.03%, avg=24568.00, stdev=5781.31, samples=2 00:09:36.037 iops : min= 5120, max= 7164, avg=6142.00, stdev=1445.33, samples=2 00:09:36.037 lat (msec) : 2=0.08%, 10=69.57%, 20=26.32%, 50=4.03% 00:09:36.037 cpu : usr=2.99%, sys=4.89%, ctx=674, majf=0, minf=2 00:09:36.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:36.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.037 issued rwts: total=5757,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.037 job3: (groupid=0, jobs=1): err= 0: pid=886398: Wed Oct 30 13:56:34 2024 00:09:36.037 read: IOPS=4076, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:09:36.037 slat (nsec): min=974, max=24945k, avg=163413.52, stdev=1341264.22 00:09:36.037 clat (usec): min=1388, max=74280, avg=19954.59, stdev=20377.21 00:09:36.037 lat (usec): min=2713, max=74287, avg=20118.00, stdev=20505.38 00:09:36.037 clat percentiles (usec): 00:09:36.037 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 8160], 00:09:36.037 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10945], 60.00th=[11731], 00:09:36.037 | 70.00th=[13698], 80.00th=[21103], 90.00th=[64226], 95.00th=[67634], 00:09:36.037 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:09:36.037 | 99.99th=[73925] 00:09:36.037 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:36.037 slat (nsec): min=1611, max=7955.5k, avg=75328.42, stdev=448301.11 00:09:36.037 clat (usec): min=1156, max=71850, avg=11099.95, stdev=8351.22 00:09:36.037 lat (usec): min=1166, max=71855, avg=11175.28, stdev=8390.11 00:09:36.037 clat percentiles (usec): 00:09:36.037 | 1.00th=[ 2442], 5.00th=[ 4490], 10.00th=[ 5342], 20.00th=[ 7046], 00:09:36.037 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10290], 00:09:36.037 | 70.00th=[11863], 80.00th=[13304], 90.00th=[14746], 95.00th=[16909], 00:09:36.037 | 99.00th=[63701], 99.50th=[65799], 99.90th=[71828], 99.95th=[71828], 00:09:36.037 | 99.99th=[71828] 00:09:36.037 bw ( KiB/s): min= 8192, max=24576, per=16.02%, avg=16384.00, stdev=11585.24, samples=2 00:09:36.037 iops : min= 2048, max= 6144, avg=4096.00, stdev=2896.31, samples=2 00:09:36.037 lat (msec) : 2=0.33%, 4=1.75%, 10=46.33%, 20=39.34%, 50=4.15% 00:09:36.037 lat (msec) : 100=8.10% 00:09:36.037 cpu : usr=2.79%, sys=4.09%, ctx=366, majf=0, minf=1 00:09:36.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:36.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.037 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.037 00:09:36.037 Run status group 0 (all jobs): 00:09:36.037 READ: bw=97.1MiB/s (102MB/s), 15.9MiB/s-29.9MiB/s (16.7MB/s-31.4MB/s), io=97.4MiB (102MB), run=1003-1003msec 00:09:36.037 WRITE: bw=99.9MiB/s (105MB/s), 16.0MiB/s-30.1MiB/s (16.7MB/s-31.5MB/s), io=100MiB (105MB), run=1003-1003msec 00:09:36.037 00:09:36.037 Disk stats (read/write): 00:09:36.037 nvme0n1: ios=6194/6567, merge=0/0, ticks=23714/21421, in_queue=45135, util=86.27% 00:09:36.037 nvme0n2: ios=6172/6371, merge=0/0, ticks=39388/35719, in_queue=75107, util=90.93% 00:09:36.037 
nvme0n3: ios=4649/5107, merge=0/0, ticks=16182/17486, in_queue=33668, util=96.62% 00:09:36.037 nvme0n4: ios=3366/3584, merge=0/0, ticks=32464/23507, in_queue=55971, util=98.08% 00:09:36.037 13:56:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:36.037 13:56:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=886684 00:09:36.037 13:56:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:36.037 13:56:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:36.037 [global] 00:09:36.037 thread=1 00:09:36.037 invalidate=1 00:09:36.037 rw=read 00:09:36.037 time_based=1 00:09:36.037 runtime=10 00:09:36.037 ioengine=libaio 00:09:36.037 direct=1 00:09:36.037 bs=4096 00:09:36.037 iodepth=1 00:09:36.037 norandommap=1 00:09:36.037 numjobs=1 00:09:36.037 00:09:36.037 [job0] 00:09:36.037 filename=/dev/nvme0n1 00:09:36.037 [job1] 00:09:36.038 filename=/dev/nvme0n2 00:09:36.038 [job2] 00:09:36.038 filename=/dev/nvme0n3 00:09:36.038 [job3] 00:09:36.038 filename=/dev/nvme0n4 00:09:36.038 Could not set queue depth (nvme0n1) 00:09:36.038 Could not set queue depth (nvme0n2) 00:09:36.038 Could not set queue depth (nvme0n3) 00:09:36.038 Could not set queue depth (nvme0n4) 00:09:36.297 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.297 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.297 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.297 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.297 fio-3.35 00:09:36.297 Starting 4 threads 00:09:38.844 13:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:39.106 13:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:39.106 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:09:39.106 fio: pid=886924, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:39.367 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=5107712, buflen=4096 00:09:39.367 fio: pid=886923, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:39.367 13:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.367 13:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:39.629 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11673600, buflen=4096 00:09:39.629 fio: pid=886921, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:39.629 13:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.629 13:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:09:39.629 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4239360, buflen=4096 00:09:39.629 fio: pid=886922, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:39.629 13:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.629 13:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:39.629 00:09:39.629 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=886921: Wed Oct 30 13:56:37 2024 00:09:39.629 read: IOPS=962, BW=3850KiB/s (3942kB/s)(11.1MiB/2961msec) 00:09:39.629 slat (usec): min=6, max=22061, avg=42.28, stdev=485.87 00:09:39.629 clat (usec): min=431, max=41275, avg=982.45, stdev=769.30 00:09:39.629 lat (usec): min=457, max=41301, avg=1024.73, stdev=910.52 00:09:39.629 clat percentiles (usec): 00:09:39.629 | 1.00th=[ 619], 5.00th=[ 799], 10.00th=[ 865], 20.00th=[ 922], 00:09:39.629 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:09:39.629 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:09:39.629 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 2474], 99.95th=[ 7111], 00:09:39.629 | 99.99th=[41157] 00:09:39.629 bw ( KiB/s): min= 3768, max= 4080, per=59.67%, avg=3940.80, stdev=111.80, samples=5 00:09:39.629 iops : min= 942, max= 1020, avg=985.20, stdev=27.95, samples=5 00:09:39.629 lat (usec) : 500=0.11%, 750=2.98%, 1000=61.35% 00:09:39.629 lat (msec) : 2=35.43%, 4=0.04%, 10=0.04%, 50=0.04% 00:09:39.629 cpu : usr=2.50%, sys=3.04%, ctx=2855, majf=0, minf=2 00:09:39.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.629 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.629 issued rwts: total=2851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.629 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=886922: Wed Oct 30 13:56:37 2024 00:09:39.629 read: IOPS=329, BW=1316KiB/s (1347kB/s)(4140KiB/3147msec) 00:09:39.629 slat (usec): min=6, max=21424, avg=69.11, stdev=856.95 00:09:39.629 clat (usec): min=258, max=42999, avg=2944.24, stdev=8779.94 00:09:39.629 lat (usec): min=285, max=43030, avg=3013.39, stdev=8811.74 00:09:39.629 clat percentiles (usec): 00:09:39.629 | 1.00th=[ 437], 5.00th=[ 693], 10.00th=[ 791], 20.00th=[ 865], 00:09:39.629 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 988], 00:09:39.629 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1106], 95.00th=[10159], 00:09:39.629 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[43254], 00:09:39.629 | 99.99th=[43254] 00:09:39.629 bw ( KiB/s): min= 232, max= 2000, per=20.31%, avg=1341.83, stdev=645.85, samples=6 00:09:39.629 iops : min= 58, max= 500, avg=335.33, stdev=161.49, samples=6 00:09:39.629 lat (usec) : 500=1.74%, 750=4.73%, 1000=57.63% 00:09:39.629 lat (msec) : 2=30.69%, 4=0.10%, 20=0.10%, 50=4.92% 00:09:39.629 cpu : usr=1.02%, sys=0.76%, ctx=1040, majf=0, minf=1 00:09:39.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:39.629 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.629 issued rwts: total=1036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.629 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=886923: Wed Oct 30 13:56:37 2024 00:09:39.629 read: IOPS=450, BW=1799KiB/s (1843kB/s)(4988KiB/2772msec) 00:09:39.629 slat (nsec): min=6796, max=69508, avg=27150.81, stdev=3162.88 00:09:39.629 clat (usec): min=402, max=42080, avg=2171.85, stdev=6894.03 00:09:39.629 lat (usec): min=429, max=42108, avg=2199.00, stdev=6894.03 00:09:39.629 clat percentiles (usec): 00:09:39.629 | 1.00th=[ 717], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 914], 00:09:39.629 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:09:39.629 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1106], 00:09:39.629 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:39.629 | 99.99th=[42206] 00:09:39.629 bw ( KiB/s): min= 96, max= 2912, per=25.87%, avg=1708.80, stdev=1203.70, samples=5 00:09:39.629 iops : min= 24, max= 728, avg=427.20, stdev=300.92, samples=5 00:09:39.629 lat (usec) : 500=0.08%, 750=1.84%, 1000=59.21% 00:09:39.629 lat (msec) : 2=35.74%, 4=0.08%, 50=2.96% 00:09:39.629 cpu : usr=0.43%, sys=2.17%, ctx=1248, majf=0, minf=2 00:09:39.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.629 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.629 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.629 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=886924: Wed Oct 30 13:56:37 2024 00:09:39.629 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(252KiB/2608msec) 00:09:39.629 slat (nsec): min=26547, max=34325, avg=27270.53, stdev=1117.99 00:09:39.629 clat (usec): min=931, max=42128, avg=41022.53, stdev=5149.80 00:09:39.629 lat (usec): min=965, max=42155, avg=41049.81, stdev=5148.90 00:09:39.629 clat percentiles (usec): 00:09:39.629 | 1.00th=[ 930], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:39.629 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:39.629 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:39.629 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:39.629 | 99.99th=[42206] 00:09:39.629 bw ( KiB/s): min= 88, max= 104, per=1.45%, avg=96.00, stdev= 5.66, samples=5 00:09:39.629 iops : min= 22, max= 26, avg=24.00, stdev= 1.41, samples=5 00:09:39.629 lat (usec) : 1000=1.56% 00:09:39.629 lat (msec) : 50=96.88% 00:09:39.629 cpu : usr=0.08%, sys=0.08%, ctx=64, majf=0, minf=2 00:09:39.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.629 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.629 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.629 00:09:39.629 Run status group 0 (all jobs): 00:09:39.629 READ: bw=6603KiB/s (6762kB/s), 96.6KiB/s-3850KiB/s (98.9kB/s-3942kB/s), io=20.3MiB (21.3MB), run=2608-3147msec 00:09:39.629 
00:09:39.629 Disk stats (read/write): 00:09:39.629 nvme0n1: ios=2761/0, merge=0/0, ticks=2580/0, in_queue=2580, util=93.79% 00:09:39.629 nvme0n2: ios=1032/0, merge=0/0, ticks=3009/0, in_queue=3009, util=94.30% 00:09:39.629 nvme0n3: ios=1127/0, merge=0/0, ticks=2547/0, in_queue=2547, util=95.99% 00:09:39.629 nvme0n4: ios=63/0, merge=0/0, ticks=2585/0, in_queue=2585, util=96.42% 00:09:39.890 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.890 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:40.150 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.150 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:40.150 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.150 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:40.465 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.465 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 886684 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:40.725 nvmf hotplug test: fio failed as expected 00:09:40.725 13:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.985 rmmod nvme_tcp 00:09:40.985 rmmod nvme_fabrics 00:09:40.985 rmmod nvme_keyring 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 882910 ']' 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 882910 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 882910 ']' 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 882910 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 882910 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 882910' 00:09:40.985 killing process with pid 882910 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 882910 00:09:40.985 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 882910 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
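For reference, the fio-target teardown traced above reduces to a short sequence of rpc.py calls plus initiator cleanup. A minimal standalone sketch of the same steps (the RPC socket is assumed to be the default /var/tmp/spdk.sock; the bdev and subsystem names are the ones created by target/fio.sh):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drop the kernel initiator connection first
for malloc_bdev in Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$RPC" bdev_malloc_delete "$malloc_bdev"             # hot-remove each namespace's backing bdev while fio runs
done
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp                                  # also pulls out nvme_fabrics/nvme_keyring, as the rmmod lines above show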
00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.246 13:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.158 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.158 00:09:43.158 real 0m29.284s 00:09:43.158 user 2m35.426s 00:09:43.158 sys 0m9.521s 00:09:43.158 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.158 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.158 ************************************ 00:09:43.158 END TEST nvmf_fio_target 00:09:43.158 ************************************ 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.421 ************************************ 00:09:43.421 START TEST nvmf_bdevio 00:09:43.421 ************************************ 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:43.421 * Looking for test storage... 
00:09:43.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.421 --rc genhtml_branch_coverage=1 00:09:43.421 --rc genhtml_function_coverage=1 00:09:43.421 --rc genhtml_legend=1 00:09:43.421 --rc geninfo_all_blocks=1 00:09:43.421 --rc geninfo_unexecuted_blocks=1 00:09:43.421 00:09:43.421 ' 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.421 --rc genhtml_branch_coverage=1 00:09:43.421 --rc genhtml_function_coverage=1 00:09:43.421 --rc genhtml_legend=1 00:09:43.421 --rc geninfo_all_blocks=1 00:09:43.421 --rc geninfo_unexecuted_blocks=1 00:09:43.421 00:09:43.421 ' 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.421 --rc genhtml_branch_coverage=1 00:09:43.421 --rc genhtml_function_coverage=1 00:09:43.421 --rc genhtml_legend=1 00:09:43.421 --rc geninfo_all_blocks=1 00:09:43.421 --rc geninfo_unexecuted_blocks=1 00:09:43.421 00:09:43.421 ' 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.421 --rc genhtml_branch_coverage=1 00:09:43.421 --rc genhtml_function_coverage=1 00:09:43.421 --rc genhtml_legend=1 00:09:43.421 --rc geninfo_all_blocks=1 00:09:43.421 --rc geninfo_unexecuted_blocks=1 00:09:43.421 00:09:43.421 ' 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.421 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.684 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:51.828 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:51.828 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.828 13:56:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:51.828 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:51.828 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.828 
13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.828 13:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.828 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.828 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.828 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.828 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:09:51.829 00:09:51.829 --- 10.0.0.2 ping statistics --- 00:09:51.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.829 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:09:51.829 00:09:51.829 --- 10.0.0.1 ping statistics --- 00:09:51.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.829 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=891969 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 891969 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 891969 ']' 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.829 13:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:51.829 [2024-10-30 13:56:49.281111] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
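The nvmf_tcp_init trace above sets up the test network by moving one port of the e810 pair into a private namespace for the target; condensed into plain commands it is roughly the following (interface names cvl_0_0/cvl_0_1 are specific to this machine):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &   # target started inside the namespace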
00:09:51.829 [2024-10-30 13:56:49.281177] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.829 [2024-10-30 13:56:49.384723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.829 [2024-10-30 13:56:49.437482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.829 [2024-10-30 13:56:49.437534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.829 [2024-10-30 13:56:49.437543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.829 [2024-10-30 13:56:49.437550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.829 [2024-10-30 13:56:49.437556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.829 [2024-10-30 13:56:49.439629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:51.829 [2024-10-30 13:56:49.439836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:51.829 [2024-10-30 13:56:49.439993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:51.829 [2024-10-30 13:56:49.439995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.829 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.829 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:51.829 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:51.829 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.829 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.089 [2024-10-30 13:56:50.167532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.089 Malloc0 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.089 13:56:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.089 [2024-10-30 13:56:50.243675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:52.089 { 00:09:52.089 "params": { 00:09:52.089 "name": "Nvme$subsystem", 00:09:52.089 "trtype": "$TEST_TRANSPORT", 00:09:52.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.089 "adrfam": "ipv4", 00:09:52.089 "trsvcid": "$NVMF_PORT", 00:09:52.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.089 "hdgst": ${hdgst:-false}, 00:09:52.089 "ddgst": ${ddgst:-false} 00:09:52.089 }, 00:09:52.089 "method": "bdev_nvme_attach_controller" 00:09:52.089 } 00:09:52.089 EOF 00:09:52.089 )") 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:52.089 13:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:52.089 "params": { 00:09:52.089 "name": "Nvme1", 00:09:52.089 "trtype": "tcp", 00:09:52.089 "traddr": "10.0.0.2", 00:09:52.089 "adrfam": "ipv4", 00:09:52.089 "trsvcid": "4420", 00:09:52.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.089 "hdgst": false, 00:09:52.089 "ddgst": false 00:09:52.089 }, 00:09:52.089 "method": "bdev_nvme_attach_controller" 00:09:52.089 }' 00:09:52.089 [2024-10-30 13:56:50.299703] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
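The target-side configuration that bdevio exercises is visible in the rpc_cmd trace above; issued directly with rpc.py it would look roughly like this (default RPC socket assumed, sizes and names as used by target/bdevio.sh):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, 8 KiB IO unit size
"$RPC" bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB malloc bdev, 512-byte blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420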
00:09:52.089 [2024-10-30 13:56:50.299779] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892314 ] 00:09:52.349 [2024-10-30 13:56:50.392056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:52.349 [2024-10-30 13:56:50.448853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.349 [2024-10-30 13:56:50.449021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.350 [2024-10-30 13:56:50.449021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.610 I/O targets: 00:09:52.610 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:52.610 00:09:52.610 00:09:52.610 CUnit - A unit testing framework for C - Version 2.1-3 00:09:52.610 http://cunit.sourceforge.net/ 00:09:52.610 00:09:52.610 00:09:52.610 Suite: bdevio tests on: Nvme1n1 00:09:52.610 Test: blockdev write read block ...passed 00:09:52.610 Test: blockdev write zeroes read block ...passed 00:09:52.610 Test: blockdev write zeroes read no split ...passed 00:09:52.610 Test: blockdev write zeroes read split ...passed 00:09:52.871 Test: blockdev write zeroes read split partial ...passed 00:09:52.871 Test: blockdev reset ...[2024-10-30 13:56:50.945718] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:52.871 [2024-10-30 13:56:50.945825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0fb30 (9): Bad file descriptor 00:09:52.871 [2024-10-30 13:56:51.044587] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:52.871 passed 00:09:52.871 Test: blockdev write read 8 blocks ...passed 00:09:52.871 Test: blockdev write read size > 128k ...passed 00:09:52.871 Test: blockdev write read invalid size ...passed 00:09:53.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:53.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:53.132 Test: blockdev write read max offset ...passed 00:09:53.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:53.132 Test: blockdev writev readv 8 blocks ...passed 00:09:53.132 Test: blockdev writev readv 30 x 1block ...passed 00:09:53.132 Test: blockdev writev readv block ...passed 00:09:53.132 Test: blockdev writev readv size > 128k ...passed 00:09:53.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:53.133 Test: blockdev comparev and writev ...[2024-10-30 13:56:51.311843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.133 [2024-10-30 13:56:51.311893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.311911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.133 [2024-10-30 13:56:51.311920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.312465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.133 [2024-10-30 13:56:51.312481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.312497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.133 [2024-10-30 13:56:51.312506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.313026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.133 [2024-10-30 13:56:51.313042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.313062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.133 [2024-10-30 13:56:51.313074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.313608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.133 [2024-10-30 13:56:51.313622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.313636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.133 [2024-10-30 13:56:51.313645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:53.133 passed 00:09:53.133 Test: blockdev nvme passthru rw ...passed 00:09:53.133 Test: blockdev nvme passthru vendor specific ...[2024-10-30 13:56:51.397673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:53.133 [2024-10-30 13:56:51.397694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.398064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:53.133 [2024-10-30 13:56:51.398077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.398454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:53.133 [2024-10-30 13:56:51.398466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:53.133 [2024-10-30 13:56:51.398842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:53.133 [2024-10-30 13:56:51.398854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:53.133 passed 00:09:53.133 Test: blockdev nvme admin passthru ...passed 00:09:53.394 Test: blockdev copy ...passed 00:09:53.394 00:09:53.394 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.394 suites 1 1 n/a 0 0 00:09:53.394 tests 23 23 23 0 0 00:09:53.394 asserts 152 152 152 0 n/a 00:09:53.394 00:09:53.394 Elapsed time = 1.371 seconds 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.394 rmmod nvme_tcp 00:09:53.394 rmmod nvme_fabrics 00:09:53.394 rmmod nvme_keyring 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
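The bdevio binary in the run above was driven by the JSON printed by gen_nvmf_target_json, which points it at a bdev_nvme controller attached over NVMe/TCP rather than at local hardware. Written out as a standalone config file, the equivalent would be along these lines; the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON config layout and the file path is hypothetical, while the parameter values are the ones shown in the trace:

cat > /tmp/nvme_bdevio.json <<'EOF'   # hypothetical path; wrapper layout assumed
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/nvme_bdevio.json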
00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 891969 ']' 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 891969 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 891969 ']' 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 891969 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.394 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 891969 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 891969' 00:09:53.654 killing process with pid 891969 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 891969 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 891969 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.654 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.200 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:56.200 00:09:56.200 real 0m12.388s 00:09:56.200 user 0m14.447s 00:09:56.200 sys 0m6.220s 00:09:56.200 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.200 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.200 ************************************ 00:09:56.200 END TEST nvmf_bdevio 00:09:56.200 ************************************ 00:09:56.200 13:56:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:56.200 00:09:56.200 real 5m4.938s 00:09:56.200 user 11m46.557s 00:09:56.200 sys 1m51.053s 00:09:56.200 
13:56:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.200 13:56:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.200 ************************************ 00:09:56.200 END TEST nvmf_target_core 00:09:56.200 ************************************ 00:09:56.200 13:56:53 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:56.200 13:56:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.200 13:56:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.200 13:56:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.200 ************************************ 00:09:56.200 START TEST nvmf_target_extra 00:09:56.200 ************************************ 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:56.201 * Looking for test storage... 00:09:56.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.201 --rc genhtml_branch_coverage=1 00:09:56.201 --rc genhtml_function_coverage=1 00:09:56.201 --rc genhtml_legend=1 00:09:56.201 --rc geninfo_all_blocks=1 00:09:56.201 --rc geninfo_unexecuted_blocks=1 00:09:56.201 00:09:56.201 ' 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.201 --rc genhtml_branch_coverage=1 00:09:56.201 --rc genhtml_function_coverage=1 00:09:56.201 --rc genhtml_legend=1 00:09:56.201 --rc geninfo_all_blocks=1 00:09:56.201 --rc geninfo_unexecuted_blocks=1 00:09:56.201 00:09:56.201 ' 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.201 --rc genhtml_branch_coverage=1 00:09:56.201 --rc genhtml_function_coverage=1 00:09:56.201 --rc genhtml_legend=1 00:09:56.201 --rc geninfo_all_blocks=1 00:09:56.201 --rc geninfo_unexecuted_blocks=1 00:09:56.201 00:09:56.201 ' 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.201 --rc genhtml_branch_coverage=1 00:09:56.201 --rc genhtml_function_coverage=1 00:09:56.201 --rc genhtml_legend=1 00:09:56.201 --rc geninfo_all_blocks=1 00:09:56.201 --rc geninfo_unexecuted_blocks=1 00:09:56.201 00:09:56.201 ' 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.201 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:56.202 ************************************ 00:09:56.202 START TEST nvmf_example 00:09:56.202 ************************************ 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:56.202 * Looking for test storage... 
00:09:56.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.202 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.464 --rc genhtml_branch_coverage=1 00:09:56.464 --rc genhtml_function_coverage=1 00:09:56.464 --rc genhtml_legend=1 00:09:56.464 --rc geninfo_all_blocks=1 00:09:56.464 --rc geninfo_unexecuted_blocks=1 00:09:56.464 00:09:56.464 ' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.464 --rc genhtml_branch_coverage=1 00:09:56.464 --rc genhtml_function_coverage=1 00:09:56.464 --rc genhtml_legend=1 00:09:56.464 --rc geninfo_all_blocks=1 00:09:56.464 --rc geninfo_unexecuted_blocks=1 00:09:56.464 00:09:56.464 ' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.464 --rc genhtml_branch_coverage=1 00:09:56.464 --rc genhtml_function_coverage=1 00:09:56.464 --rc genhtml_legend=1 00:09:56.464 --rc geninfo_all_blocks=1 00:09:56.464 --rc geninfo_unexecuted_blocks=1 00:09:56.464 00:09:56.464 ' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.464 --rc genhtml_branch_coverage=1 00:09:56.464 --rc genhtml_function_coverage=1 00:09:56.464 --rc genhtml_legend=1 00:09:56.464 --rc geninfo_all_blocks=1 00:09:56.464 --rc geninfo_unexecuted_blocks=1 00:09:56.464 00:09:56.464 ' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:56.464 13:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:56.464 13:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:56.464 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:04.608 13:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:04.608 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:04.608 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:04.608 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.608 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:04.609 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.609 13:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:04.609 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:04.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:04.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:10:04.609 00:10:04.609 --- 10.0.0.2 ping statistics --- 00:10:04.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.609 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:04.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:10:04.609 00:10:04.609 --- 10.0.0.1 ping statistics --- 00:10:04.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.609 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=897048 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 897048 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 897048 ']' 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.609 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.870 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.871 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:04.871 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:04.871 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.871 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.871 13:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:04.871 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:17.107 Initializing NVMe Controllers 00:10:17.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:17.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:17.107 Initialization complete. Launching workers. 00:10:17.107 ======================================================== 00:10:17.107 Latency(us) 00:10:17.107 Device Information : IOPS MiB/s Average min max 00:10:17.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18658.90 72.89 3429.48 632.63 15483.64 00:10:17.107 ======================================================== 00:10:17.107 Total : 18658.90 72.89 3429.48 632.63 15483.64 00:10:17.107 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.107 rmmod nvme_tcp 00:10:17.107 rmmod nvme_fabrics 00:10:17.107 rmmod nvme_keyring 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 897048 ']' 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 897048 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 897048 ']' 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 897048 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 897048 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 897048' 00:10:17.107 killing process with pid 897048 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 897048 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 897048 00:10:17.107 nvmf threads initialize successfully 00:10:17.107 bdev subsystem init successfully 00:10:17.107 created a nvmf target service 00:10:17.107 create targets's poll groups done 00:10:17.107 all subsystems of target started 00:10:17.107 nvmf target is running 00:10:17.107 all subsystems of target stopped 00:10:17.107 destroy targets's poll groups done 00:10:17.107 destroyed the nvmf target service 00:10:17.107 bdev subsystem finish successfully 00:10:17.107 nvmf threads destroy successfully 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.107 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.368 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.368 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:17.368 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.368 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.368 00:10:17.368 real 0m21.353s 00:10:17.368 user 0m46.360s 00:10:17.368 sys 0m7.022s 00:10:17.368 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.368 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.368 ************************************ 00:10:17.368 END TEST nvmf_example 00:10:17.368 ************************************ 00:10:17.629 13:57:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:17.629 13:57:15 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.629 13:57:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.630 13:57:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:17.630 ************************************ 00:10:17.630 START TEST nvmf_filesystem 00:10:17.630 ************************************ 00:10:17.630 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:17.630 * Looking for test storage... 00:10:17.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.630 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.630 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.630 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.630 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.630 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.630 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.897 --rc genhtml_branch_coverage=1 00:10:17.897 --rc genhtml_function_coverage=1 00:10:17.897 --rc genhtml_legend=1 00:10:17.897 --rc geninfo_all_blocks=1 00:10:17.897 --rc geninfo_unexecuted_blocks=1 00:10:17.897 00:10:17.897 ' 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.897 --rc genhtml_branch_coverage=1 00:10:17.897 --rc genhtml_function_coverage=1 00:10:17.897 --rc genhtml_legend=1 00:10:17.897 --rc geninfo_all_blocks=1 00:10:17.897 --rc geninfo_unexecuted_blocks=1 00:10:17.897 00:10:17.897 ' 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.897 --rc genhtml_branch_coverage=1 00:10:17.897 --rc genhtml_function_coverage=1 00:10:17.897 --rc genhtml_legend=1 00:10:17.897 --rc geninfo_all_blocks=1 00:10:17.897 --rc geninfo_unexecuted_blocks=1 00:10:17.897 00:10:17.897 ' 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.897 --rc genhtml_branch_coverage=1 00:10:17.897 --rc genhtml_function_coverage=1 00:10:17.897 --rc genhtml_legend=1 00:10:17.897 --rc geninfo_all_blocks=1 00:10:17.897 --rc geninfo_unexecuted_blocks=1 00:10:17.897 00:10:17.897 ' 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:17.897 13:57:15 
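The `lt 1.15 2` check traced above walks scripts/common.sh's `cmp_versions`, splitting each version string on `.`, `-` and `:` and comparing the parts numerically. A simplified sketch of that element-wise comparison, assuming purely numeric components (the real helper also validates each part through its `decimal` function):

```sh
# Simplified element-wise "less than" version compare, modelled on the
# cmp_versions trace above; assumes numeric dot/dash/colon-separated parts.
version_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x"
```

In this run the detected lcov 1.15 is below 2, so the 1.x-style `--rc lcov_branch_coverage` / `--rc lcov_function_coverage` options are exported, as the trace shows.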
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:17.897 
13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:17.897 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:17.898 #define SPDK_CONFIG_H 00:10:17.898 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:17.898 #define SPDK_CONFIG_APPS 1 00:10:17.898 #define SPDK_CONFIG_ARCH native 00:10:17.898 #undef SPDK_CONFIG_ASAN 00:10:17.898 #undef SPDK_CONFIG_AVAHI 00:10:17.898 #undef SPDK_CONFIG_CET 00:10:17.898 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:17.898 #define SPDK_CONFIG_COVERAGE 1 00:10:17.898 #define SPDK_CONFIG_CROSS_PREFIX 00:10:17.898 #undef SPDK_CONFIG_CRYPTO 00:10:17.898 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:17.898 #undef SPDK_CONFIG_CUSTOMOCF 00:10:17.898 #undef SPDK_CONFIG_DAOS 00:10:17.898 #define SPDK_CONFIG_DAOS_DIR 00:10:17.898 #define SPDK_CONFIG_DEBUG 1 00:10:17.898 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:17.898 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:17.898 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:17.898 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:17.898 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:17.898 #undef SPDK_CONFIG_DPDK_UADK 00:10:17.898 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:17.898 #define SPDK_CONFIG_EXAMPLES 1 00:10:17.898 #undef SPDK_CONFIG_FC 00:10:17.898 #define SPDK_CONFIG_FC_PATH 00:10:17.898 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:17.898 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:17.898 #define SPDK_CONFIG_FSDEV 1 00:10:17.898 #undef SPDK_CONFIG_FUSE 00:10:17.898 #undef SPDK_CONFIG_FUZZER 00:10:17.898 #define SPDK_CONFIG_FUZZER_LIB 00:10:17.898 #undef SPDK_CONFIG_GOLANG 00:10:17.898 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:17.898 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:17.898 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:17.898 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:17.898 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:17.898 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:17.898 #undef SPDK_CONFIG_HAVE_LZ4 00:10:17.898 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:17.898 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:17.898 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:17.898 #define SPDK_CONFIG_IDXD 1 00:10:17.898 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:17.898 #undef SPDK_CONFIG_IPSEC_MB 00:10:17.898 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:17.898 #define SPDK_CONFIG_ISAL 1 00:10:17.898 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:17.898 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:17.898 #define SPDK_CONFIG_LIBDIR 00:10:17.898 #undef SPDK_CONFIG_LTO 00:10:17.898 #define SPDK_CONFIG_MAX_LCORES 128 00:10:17.898 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:17.898 #define SPDK_CONFIG_NVME_CUSE 1 00:10:17.898 #undef SPDK_CONFIG_OCF 00:10:17.898 #define SPDK_CONFIG_OCF_PATH 00:10:17.898 #define SPDK_CONFIG_OPENSSL_PATH 00:10:17.898 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:17.898 #define SPDK_CONFIG_PGO_DIR 00:10:17.898 #undef SPDK_CONFIG_PGO_USE 00:10:17.898 #define SPDK_CONFIG_PREFIX /usr/local 00:10:17.898 #undef SPDK_CONFIG_RAID5F 00:10:17.898 #undef SPDK_CONFIG_RBD 00:10:17.898 #define SPDK_CONFIG_RDMA 1 00:10:17.898 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:17.898 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:17.898 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:17.898 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:17.898 #define SPDK_CONFIG_SHARED 1 00:10:17.898 #undef SPDK_CONFIG_SMA 00:10:17.898 #define SPDK_CONFIG_TESTS 1 00:10:17.898 #undef SPDK_CONFIG_TSAN 
00:10:17.898 #define SPDK_CONFIG_UBLK 1 00:10:17.898 #define SPDK_CONFIG_UBSAN 1 00:10:17.898 #undef SPDK_CONFIG_UNIT_TESTS 00:10:17.898 #undef SPDK_CONFIG_URING 00:10:17.898 #define SPDK_CONFIG_URING_PATH 00:10:17.898 #undef SPDK_CONFIG_URING_ZNS 00:10:17.898 #undef SPDK_CONFIG_USDT 00:10:17.898 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:17.898 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:17.898 #define SPDK_CONFIG_VFIO_USER 1 00:10:17.898 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:17.898 #define SPDK_CONFIG_VHOST 1 00:10:17.898 #define SPDK_CONFIG_VIRTIO 1 00:10:17.898 #undef SPDK_CONFIG_VTUNE 00:10:17.898 #define SPDK_CONFIG_VTUNE_DIR 00:10:17.898 #define SPDK_CONFIG_WERROR 1 00:10:17.898 #define SPDK_CONFIG_WPDK_DIR 00:10:17.898 #undef SPDK_CONFIG_XNVME 00:10:17.898 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.898 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
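A few entries above, applications.sh decides whether debug-app handling applies by reading the generated config header and glob-matching its contents for the SPDK_CONFIG_DEBUG define. A sketch of that check, with the header path taken from the trace:

```sh
# Read the generated SPDK config header and test for a debug build the same way
# the trace does: a plain glob match on the whole file contents.
config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
if [[ -e "$config_h" && "$(<"$config_h")" == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    # Extra handling is gated on SPDK_AUTOTEST_DEBUG_APPS, which is exported as 0
    # later in this trace, so nothing additional happens in this run.
    echo "debug build detected"
fi
```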
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:17.899 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:17.899 13:57:16 
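The PATH values traced above grow by the same /opt/go, /opt/golangci and /opt/protoc prefixes each time paths/export.sh is re-sourced, which is why the exported string repeats those directories many times. The duplicates are harmless but noisy; a small helper that keeps only the first occurrence of each entry is shown below. It is illustrative only and not part of the SPDK scripts, which simply prepend and tolerate the repetition.

```sh
# Collapse duplicate PATH entries, keeping the first occurrence of each.
# Illustrative helper only; assumes entries contain no glob characters needing care
# beyond the set -f guard.
dedup_path() {
    local IFS=: entry out= seen=:
    set -f                      # entries are paths; don't glob-expand them
    for entry in $1; do
        [[ $seen == *":$entry:"* ]] && continue
        seen+="$entry:"
        out+="${out:+:}$entry"
    done
    set +f
    printf '%s\n' "$out"
}

PATH=$(dedup_path "$PATH")
```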
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:17.899 13:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_AE4DMA 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_BLOBFS 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VHOST_INIT 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_LVOL 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:17.899 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_ASAN 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 1 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_UBSAN 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_RUN_NON_ROOT 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_CRYPTO 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_FTL 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OCF 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@138 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_VMD 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_OPAL 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_NATIVE_DPDK 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : true 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_AUTOTEST_X 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_URING 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USDT 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_USE_IGB_UIO 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCHEDULER 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_SCANBUILD 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : e810 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_NVMF_NICS 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_SMA 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_DAOS 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_XNVME 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_DSA 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 
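The long run of `: 0` / `export SPDK_TEST_*` pairs above is autotest_common.sh applying a default to each test switch and then exporting it; because autorun-spdk.conf was sourced earlier in the job, its values win and xtrace only prints the resolved result (`: 1` for SPDK_TEST_NVMF, `: tcp` for the transport, `: e810` for the NIC class). A minimal reconstruction of the idiom, with the fallback values assumed rather than taken from the source:

```sh
# Default-then-export pattern implied by the ": <value>" / "export VAR" pairs in
# the trace. The fallback values below are assumptions; in this run the values
# were already set by autorun-spdk.conf, so only the resolved results appear.
: "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS
: "${SPDK_RUN_UBSAN:=0}";             export SPDK_RUN_UBSAN
```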
-- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_ACCEL_IAA 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_FUZZER_TARGET 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_TEST_NVMF_MDNS 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_SETUP 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:17.900 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
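The entries above assemble the sanitizer environment: a fresh leak-suppression file for libfuse3 and the ASAN/UBSAN/LSAN option strings. The reconstruction below copies the option strings verbatim from the trace; how the suppression line actually reaches the file is not visible in the xtrace, so the redirection is an assumption:

```sh
# Rebuild the LSAN suppression file and export the sanitizer options shown in
# the trace. The echo redirection is assumed; the option strings are verbatim.
supp=/var/tmp/asan_suppression_file
rm -rf "$supp"
echo "leak:libfuse3.so" > "$supp"
export LSAN_OPTIONS="suppressions=$supp"
export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"
```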
00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 900292 ]] 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 900292 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:17.901 
13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.QdKA2h 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QdKA2h/tests/target /tmp/spdk.QdKA2h 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=607141888 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:17.901 13:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4677287936 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=123504062464 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356541952 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5852479488 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668237824 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678268928 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847955456 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23355392 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:10:17.901 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:17.902 13:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64678072320 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=200704 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:17.902 * Looking for test storage... 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=123504062464 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8067072000 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.902 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.164 --rc genhtml_branch_coverage=1 00:10:18.164 --rc genhtml_function_coverage=1 00:10:18.164 --rc genhtml_legend=1 00:10:18.164 --rc geninfo_all_blocks=1 00:10:18.164 --rc geninfo_unexecuted_blocks=1 00:10:18.164 00:10:18.164 ' 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.164 --rc genhtml_branch_coverage=1 00:10:18.164 --rc genhtml_function_coverage=1 00:10:18.164 --rc genhtml_legend=1 00:10:18.164 --rc geninfo_all_blocks=1 00:10:18.164 --rc geninfo_unexecuted_blocks=1 00:10:18.164 00:10:18.164 ' 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:18.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.164 --rc genhtml_branch_coverage=1 00:10:18.164 --rc genhtml_function_coverage=1 00:10:18.164 --rc genhtml_legend=1 00:10:18.164 --rc geninfo_all_blocks=1 00:10:18.164 --rc geninfo_unexecuted_blocks=1 00:10:18.164 00:10:18.164 ' 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.164 --rc genhtml_branch_coverage=1 00:10:18.164 --rc genhtml_function_coverage=1 00:10:18.164 --rc genhtml_legend=1 00:10:18.164 --rc geninfo_all_blocks=1 00:10:18.164 --rc geninfo_unexecuted_blocks=1 00:10:18.164 00:10:18.164 ' 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.164 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.165 13:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.165 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:26.310 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:26.310 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.310 13:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:26.310 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:26.310 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:26.310 13:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:10:26.310 00:10:26.310 --- 10.0.0.2 ping statistics --- 00:10:26.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.310 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:10:26.310 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:26.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:10:26.310 00:10:26.310 --- 10.0.0.1 ping statistics --- 00:10:26.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.311 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:26.311 ************************************ 00:10:26.311 START TEST nvmf_filesystem_no_in_capsule 00:10:26.311 ************************************ 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=904043 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 904043 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 904043 ']' 00:10:26.311 13:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.311 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.311 [2024-10-30 13:57:23.917897] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:10:26.311 [2024-10-30 13:57:23.917958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.311 [2024-10-30 13:57:24.015727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.311 [2024-10-30 13:57:24.069631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.311 [2024-10-30 13:57:24.069677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.311 [2024-10-30 13:57:24.069689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.311 [2024-10-30 13:57:24.069699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.311 [2024-10-30 13:57:24.069707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:26.311 [2024-10-30 13:57:24.071792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.311 [2024-10-30 13:57:24.071878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.311 [2024-10-30 13:57:24.072038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.311 [2024-10-30 13:57:24.072038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.572 [2024-10-30 13:57:24.797006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.572 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.833 Malloc1 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.833 13:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.833 [2024-10-30 13:57:24.954440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:26.833 { 00:10:26.833 "name": "Malloc1", 00:10:26.833 "aliases": [ 00:10:26.833 "535cb674-67fa-411c-a8db-ccf7b840736f" 00:10:26.833 ], 00:10:26.833 "product_name": "Malloc disk", 00:10:26.833 "block_size": 512, 00:10:26.833 "num_blocks": 1048576, 00:10:26.833 "uuid": "535cb674-67fa-411c-a8db-ccf7b840736f", 00:10:26.833 "assigned_rate_limits": { 00:10:26.833 "rw_ios_per_sec": 0, 00:10:26.833 "rw_mbytes_per_sec": 0, 00:10:26.833 "r_mbytes_per_sec": 0, 00:10:26.833 "w_mbytes_per_sec": 0 00:10:26.833 }, 00:10:26.833 "claimed": true, 00:10:26.833 "claim_type": "exclusive_write", 00:10:26.833 "zoned": false, 00:10:26.833 "supported_io_types": { 00:10:26.833 "read": 
true, 00:10:26.833 "write": true, 00:10:26.833 "unmap": true, 00:10:26.833 "flush": true, 00:10:26.833 "reset": true, 00:10:26.833 "nvme_admin": false, 00:10:26.833 "nvme_io": false, 00:10:26.833 "nvme_io_md": false, 00:10:26.833 "write_zeroes": true, 00:10:26.833 "zcopy": true, 00:10:26.833 "get_zone_info": false, 00:10:26.833 "zone_management": false, 00:10:26.833 "zone_append": false, 00:10:26.833 "compare": false, 00:10:26.833 "compare_and_write": false, 00:10:26.833 "abort": true, 00:10:26.833 "seek_hole": false, 00:10:26.833 "seek_data": false, 00:10:26.833 "copy": true, 00:10:26.833 "nvme_iov_md": false 00:10:26.833 }, 00:10:26.833 "memory_domains": [ 00:10:26.833 { 00:10:26.833 "dma_device_id": "system", 00:10:26.833 "dma_device_type": 1 00:10:26.833 }, 00:10:26.833 { 00:10:26.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.833 "dma_device_type": 2 00:10:26.833 } 00:10:26.833 ], 00:10:26.833 "driver_specific": {} 00:10:26.833 } 00:10:26.833 ]' 00:10:26.833 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:26.833 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:26.833 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:26.833 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:26.833 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:26.833 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:26.833 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:26.833 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.745 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:28.746 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:28.746 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.746 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:28.746 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:30.660 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:30.660 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:30.660 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:30.660 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:30.661 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:30.922 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:31.182 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:32.141 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:32.141 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:32.141 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.141 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.141 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.402 ************************************ 00:10:32.402 START TEST filesystem_ext4 00:10:32.402 ************************************ 00:10:32.402 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
00:10:32.402 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:32.402 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:32.402 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:32.403 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:32.403 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:32.403 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:32.403 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:32.403 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:32.403 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:32.403 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:32.403 mke2fs 1.47.0 (5-Feb-2023) 00:10:32.403 Discarding device blocks: 0/522240 done 00:10:32.403 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:32.403 Filesystem UUID: 13f53730-2be0-412d-9e99-94ec2acf8df0 00:10:32.403 Superblock backups stored on blocks: 00:10:32.403 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:32.403 00:10:32.403 Allocating group tables: 0/64 done 00:10:32.403 Writing inode tables: 0/64 done 00:10:33.785 Creating journal (8192 blocks): done 00:10:35.249 Writing superblocks and filesystem accounting information: 0/64 done 00:10:35.249 00:10:35.249 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:35.249 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.835 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.835 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:41.835 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.835 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:41.835 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:41.835 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.835 
13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 904043 00:10:41.835 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.835 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.836 00:10:41.836 real 0m8.833s 00:10:41.836 user 0m0.034s 00:10:41.836 sys 0m0.074s 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 ************************************ 00:10:41.836 END TEST filesystem_ext4 00:10:41.836 ************************************ 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 ************************************ 00:10:41.836 START TEST filesystem_btrfs 00:10:41.836 ************************************ 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:41.836 13:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:41.836 btrfs-progs v6.8.1 00:10:41.836 See https://btrfs.readthedocs.io for more information. 00:10:41.836 00:10:41.836 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:41.836 NOTE: several default settings have changed in version 5.15, please make sure 00:10:41.836 this does not affect your deployments: 00:10:41.836 - DUP for metadata (-m dup) 00:10:41.836 - enabled no-holes (-O no-holes) 00:10:41.836 - enabled free-space-tree (-R free-space-tree) 00:10:41.836 00:10:41.836 Label: (null) 00:10:41.836 UUID: 750dd047-1fec-4759-b443-f097b1c8e5b8 00:10:41.836 Node size: 16384 00:10:41.836 Sector size: 4096 (CPU page size: 4096) 00:10:41.836 Filesystem size: 510.00MiB 00:10:41.836 Block group profiles: 00:10:41.836 Data: single 8.00MiB 00:10:41.836 Metadata: DUP 32.00MiB 00:10:41.836 System: DUP 8.00MiB 00:10:41.836 SSD detected: yes 00:10:41.836 Zoned device: no 00:10:41.836 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:41.836 Checksum: crc32c 00:10:41.836 Number of devices: 1 00:10:41.836 Devices: 00:10:41.836 ID SIZE PATH 00:10:41.836 1 510.00MiB /dev/nvme0n1p1 00:10:41.836 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:41.836 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 904043 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.408 
13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.408 00:10:42.408 real 0m1.240s 00:10:42.408 user 0m0.035s 00:10:42.408 sys 0m0.111s 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:42.408 ************************************ 00:10:42.408 END TEST filesystem_btrfs 00:10:42.408 ************************************ 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.408 ************************************ 00:10:42.408 START TEST filesystem_xfs 00:10:42.408 ************************************ 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:42.408 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:42.669 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:42.669 = sectsz=512 attr=2, projid32bit=1 00:10:42.669 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:42.669 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:42.669 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:42.669 = sunit=0 swidth=0 blks 00:10:42.669 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:42.669 log =internal log bsize=4096 blocks=16384, version=2 00:10:42.669 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:42.669 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:43.613 Discarding blocks...Done. 00:10:43.613 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:43.613 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 904043 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:45.530 00:10:45.530 real 0m2.831s 00:10:45.530 user 0m0.026s 00:10:45.530 sys 0m0.077s 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:45.530 ************************************ 00:10:45.530 END TEST filesystem_xfs 00:10:45.530 ************************************ 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:45.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:45.791 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.053 13:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 904043 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 904043 ']' 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 904043 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 904043 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 904043' 00:10:46.053 killing process with pid 904043 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 904043 00:10:46.053 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 904043 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:46.315 00:10:46.315 real 0m20.569s 00:10:46.315 user 1m21.311s 00:10:46.315 sys 0m1.449s 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.315 ************************************ 00:10:46.315 END TEST nvmf_filesystem_no_in_capsule 00:10:46.315 ************************************ 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.315 ************************************ 00:10:46.315 START TEST nvmf_filesystem_in_capsule 00:10:46.315 ************************************ 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=908300 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 908300 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 908300 ']' 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
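The in-capsule variant that begins here repeats the whole suite with one difference: the TCP transport is created with a 4096-byte in-capsule data size, so small writes ride inside the command capsule instead of being transferred separately. The target-side RPC sequence the trace walks through next can be summarised as below; rpc.py stands in for the log's rpc_cmd wrapper, and the --hostnqn/--hostid options of the nvme connect call are omitted for brevity.

# Target-side setup for the in-capsule run (condensed from the rpc_cmd calls that follow).
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096      # -c 4096: accept 4 KiB of in-capsule data
rpc.py bdev_malloc_create 512 512 -b Malloc1                # 512 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side then attaches the namespace over NVMe/TCP before the ext4/btrfs/xfs passes repeat:
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420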
00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.315 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.315 [2024-10-30 13:57:44.565042] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:10:46.315 [2024-10-30 13:57:44.565092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.576 [2024-10-30 13:57:44.656642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.576 [2024-10-30 13:57:44.688626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.576 [2024-10-30 13:57:44.688653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.576 [2024-10-30 13:57:44.688661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.576 [2024-10-30 13:57:44.688668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.576 [2024-10-30 13:57:44.688673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.576 [2024-10-30 13:57:44.690103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.576 [2024-10-30 13:57:44.690253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.576 [2024-10-30 13:57:44.690403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.576 [2024-10-30 13:57:44.690404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.147 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.147 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:47.147 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.148 [2024-10-30 13:57:45.415926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.148 13:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.148 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 Malloc1 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 [2024-10-30 13:57:45.539184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:47.408 13:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:47.408 { 00:10:47.408 "name": "Malloc1", 00:10:47.408 "aliases": [ 00:10:47.408 "8ffb84d0-2e47-4a0e-937e-1c111c6a532a" 00:10:47.408 ], 00:10:47.408 "product_name": "Malloc disk", 00:10:47.408 "block_size": 512, 00:10:47.408 "num_blocks": 1048576, 00:10:47.408 "uuid": "8ffb84d0-2e47-4a0e-937e-1c111c6a532a", 00:10:47.408 "assigned_rate_limits": { 00:10:47.408 "rw_ios_per_sec": 0, 00:10:47.408 "rw_mbytes_per_sec": 0, 00:10:47.408 "r_mbytes_per_sec": 0, 00:10:47.408 "w_mbytes_per_sec": 0 00:10:47.408 }, 00:10:47.408 "claimed": true, 00:10:47.408 "claim_type": "exclusive_write", 00:10:47.408 "zoned": false, 00:10:47.408 "supported_io_types": { 00:10:47.408 "read": true, 00:10:47.408 "write": true, 00:10:47.408 "unmap": true, 00:10:47.408 "flush": true, 00:10:47.408 "reset": true, 00:10:47.408 "nvme_admin": false, 00:10:47.408 "nvme_io": false, 00:10:47.408 "nvme_io_md": false, 00:10:47.408 "write_zeroes": true, 00:10:47.408 "zcopy": true, 00:10:47.408 "get_zone_info": false, 00:10:47.408 "zone_management": false, 00:10:47.408 "zone_append": false, 00:10:47.408 "compare": false, 00:10:47.408 "compare_and_write": false, 00:10:47.408 "abort": true, 00:10:47.408 "seek_hole": false, 00:10:47.408 "seek_data": false, 00:10:47.408 "copy": true, 00:10:47.408 "nvme_iov_md": false 00:10:47.408 }, 00:10:47.408 "memory_domains": [ 00:10:47.408 { 00:10:47.408 "dma_device_id": "system", 00:10:47.408 "dma_device_type": 1 00:10:47.408 }, 00:10:47.408 { 00:10:47.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.408 "dma_device_type": 2 00:10:47.408 } 00:10:47.408 ], 00:10:47.408 "driver_specific": {} 00:10:47.408 } 00:10:47.408 ]' 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:47.408 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:47.409 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:47.409 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.319 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.319 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:49.319 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.319 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:49.319 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:51.230 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:51.494 13:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:52.069 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.452 ************************************ 00:10:53.452 START TEST filesystem_in_capsule_ext4 00:10:53.452 ************************************ 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:53.452 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:53.452 mke2fs 1.47.0 (5-Feb-2023) 00:10:53.452 Discarding device blocks: 0/522240 done 00:10:53.452 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:53.452 Filesystem UUID: ae21685a-a5ea-40cf-a6e2-e86c9306de71 00:10:53.452 Superblock backups stored on blocks: 00:10:53.452 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:53.452 00:10:53.452 Allocating group tables: 0/64 done 00:10:53.452 Writing inode tables: 
0/64 done 00:10:56.749 Creating journal (8192 blocks): done 00:10:56.749 Writing superblocks and filesystem accounting information: 0/64 done 00:10:56.749 00:10:56.749 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:56.749 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 908300 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.332 00:11:03.332 real 0m9.361s 00:11:03.332 user 0m0.030s 00:11:03.332 sys 0m0.077s 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:03.332 ************************************ 00:11:03.332 END TEST filesystem_in_capsule_ext4 00:11:03.332 ************************************ 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.332 
************************************ 00:11:03.332 START TEST filesystem_in_capsule_btrfs 00:11:03.332 ************************************ 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.332 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:03.333 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:03.333 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:03.333 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:03.333 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:03.333 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:03.333 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:03.333 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:03.333 btrfs-progs v6.8.1 00:11:03.333 See https://btrfs.readthedocs.io for more information. 00:11:03.333 00:11:03.333 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:03.333 NOTE: several default settings have changed in version 5.15, please make sure 00:11:03.333 this does not affect your deployments: 00:11:03.333 - DUP for metadata (-m dup) 00:11:03.333 - enabled no-holes (-O no-holes) 00:11:03.333 - enabled free-space-tree (-R free-space-tree) 00:11:03.333 00:11:03.333 Label: (null) 00:11:03.333 UUID: b2dc602b-6d2d-4895-a06e-a270938b8e48 00:11:03.333 Node size: 16384 00:11:03.333 Sector size: 4096 (CPU page size: 4096) 00:11:03.333 Filesystem size: 510.00MiB 00:11:03.333 Block group profiles: 00:11:03.333 Data: single 8.00MiB 00:11:03.333 Metadata: DUP 32.00MiB 00:11:03.333 System: DUP 8.00MiB 00:11:03.333 SSD detected: yes 00:11:03.333 Zoned device: no 00:11:03.333 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:03.333 Checksum: crc32c 00:11:03.333 Number of devices: 1 00:11:03.333 Devices: 00:11:03.333 ID SIZE PATH 00:11:03.333 1 510.00MiB /dev/nvme0n1p1 00:11:03.333 00:11:03.333 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:03.333 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 908300 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.333 00:11:03.333 real 0m0.544s 00:11:03.333 user 0m0.027s 00:11:03.333 sys 0m0.118s 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:03.333 ************************************ 00:11:03.333 END TEST filesystem_in_capsule_btrfs 00:11:03.333 ************************************ 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.333 ************************************ 00:11:03.333 START TEST filesystem_in_capsule_xfs 00:11:03.333 ************************************ 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:03.333 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:03.593 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:03.593 = sectsz=512 attr=2, projid32bit=1 00:11:03.593 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:03.593 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:03.593 data = bsize=4096 blocks=130560, imaxpct=25 00:11:03.593 = sunit=0 swidth=0 blks 00:11:03.593 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:03.593 log =internal log bsize=4096 blocks=16384, version=2 00:11:03.593 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:03.593 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:04.535 Discarding blocks...Done. 
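The mkfs.xfs output above is the start of the last in-capsule pass; once it completes, the suite tears down the same way the first suite did, and the closing portion of the trace is that cleanup. A rough sketch of the order follows; rpc.py again stands in for rpc_cmd, and $nvmfpid is symbolic for the pid shown in the log (908300 for this run).

# Teardown order visible in the remainder of the trace (sketch, not the verbatim script).
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1         # remove the SPDK_TEST partition under a device lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # detach the host-side controller
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: stop the nvmf_tgt reactors
modprobe -v -r nvme-tcp                                # nvmftestfini: unload host NVMe/TCP modules
modprobe -v -r nvme-fabrics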
00:11:04.535 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:04.535 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 908300 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.448 00:11:06.448 real 0m3.191s 00:11:06.448 user 0m0.027s 00:11:06.448 sys 0m0.079s 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:06.448 ************************************ 00:11:06.448 END TEST filesystem_in_capsule_xfs 00:11:06.448 ************************************ 00:11:06.448 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:06.708 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:06.708 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 908300 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 908300 ']' 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 908300 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 908300 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 908300' 00:11:06.969 killing process with pid 908300 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 908300 00:11:06.969 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 908300 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:07.230 00:11:07.230 real 0m20.862s 00:11:07.230 user 1m22.577s 00:11:07.230 sys 0m1.448s 00:11:07.230 13:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.230 ************************************ 00:11:07.230 END TEST nvmf_filesystem_in_capsule 00:11:07.230 ************************************ 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.230 rmmod nvme_tcp 00:11:07.230 rmmod nvme_fabrics 00:11:07.230 rmmod nvme_keyring 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.230 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.778 00:11:09.778 real 0m51.833s 00:11:09.778 user 2m46.302s 00:11:09.778 sys 0m8.833s 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:09.778 
************************************ 00:11:09.778 END TEST nvmf_filesystem 00:11:09.778 ************************************ 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.778 ************************************ 00:11:09.778 START TEST nvmf_target_discovery 00:11:09.778 ************************************ 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.778 * Looking for test storage... 00:11:09.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.778 --rc genhtml_branch_coverage=1 00:11:09.778 --rc genhtml_function_coverage=1 00:11:09.778 --rc genhtml_legend=1 00:11:09.778 --rc geninfo_all_blocks=1 00:11:09.778 --rc geninfo_unexecuted_blocks=1 00:11:09.778 00:11:09.778 ' 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.778 --rc genhtml_branch_coverage=1 00:11:09.778 --rc genhtml_function_coverage=1 00:11:09.778 --rc genhtml_legend=1 00:11:09.778 --rc geninfo_all_blocks=1 00:11:09.778 --rc geninfo_unexecuted_blocks=1 00:11:09.778 00:11:09.778 ' 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.778 --rc genhtml_branch_coverage=1 00:11:09.778 --rc genhtml_function_coverage=1 00:11:09.778 --rc genhtml_legend=1 00:11:09.778 --rc geninfo_all_blocks=1 00:11:09.778 --rc geninfo_unexecuted_blocks=1 00:11:09.778 00:11:09.778 ' 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.778 --rc genhtml_branch_coverage=1 00:11:09.778 --rc genhtml_function_coverage=1 00:11:09.778 --rc genhtml_legend=1 00:11:09.778 --rc geninfo_all_blocks=1 00:11:09.778 --rc geninfo_unexecuted_blocks=1 00:11:09.778 00:11:09.778 ' 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.778 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.779 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.146 13:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:18.146 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:18.147 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:18.147 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:18.147 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
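Note on the step traced above: gather_supported_nvmf_pci_devs matches PCI vendor:device IDs against the e810/x722/mlx lists and then globs /sys/bus/pci/devices/$pci/net/ to find the kernel netdevs bound to each port (here cvl_0_0 and cvl_0_1 under 0000:4b:00.0/.1). The following is only a minimal standalone sketch of that sysfs lookup, not the nvmf/common.sh helper itself; the default PCI address is simply the one this log reports.

#!/usr/bin/env bash
# Sketch: list the kernel net devices that sit under one PCI function,
# using the same /sys path the trace above expands, and report link state.
set -euo pipefail
shopt -s nullglob
pci=${1:-0000:4b:00.0}                          # example address from this log
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    name=${dev##*/}                             # e.g. cvl_0_0 on this host
    state=$(cat "$dev/operstate" 2>/dev/null || echo unknown)
    echo "Found net device under $pci: $name (operstate: $state)"
done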
00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:18.147 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.147 13:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:11:18.147 00:11:18.147 --- 10.0.0.2 ping statistics --- 00:11:18.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.147 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:11:18.147 00:11:18.147 --- 10.0.0.1 ping statistics --- 00:11:18.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.147 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=916889 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 916889 00:11:18.147 13:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 916889 ']' 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.147 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.147 [2024-10-30 13:58:15.495164] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:11:18.147 [2024-10-30 13:58:15.495239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.147 [2024-10-30 13:58:15.579298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.147 [2024-10-30 13:58:15.631966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.147 [2024-10-30 13:58:15.632014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.147 [2024-10-30 13:58:15.632025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.147 [2024-10-30 13:58:15.632034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.147 [2024-10-30 13:58:15.632042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
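Note on the preceding trace: nvmf_tcp_init followed by nvmfappstart amounts to moving the target-side port into its own network namespace, addressing both ends, opening TCP/4420 on the initiator-facing interface, verifying reachability, and launching nvmf_tgt inside the namespace. The condensed sketch below replays those steps with the interface names, addresses, and flags this log reports (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, -i 0 -e 0xFFFF -m 0xF); it is a hedged summary for reference (root required), not a substitute for nvmf/common.sh.

#!/usr/bin/env bash
set -euo pipefail
NS=cvl_0_0_ns_spdk            # target namespace name used in this run
TGT_IF=cvl_0_0                # port handed to the target namespace
INI_IF=cvl_0_1                # port left in the default namespace (initiator side)
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic arriving on the initiator interface to port 4420.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
# Start the SPDK NVMe-oF target inside the namespace (shm id 0, full
# tracepoint mask, 4-core mask), matching the invocation for pid 916889 above.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &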
00:11:18.147 [2024-10-30 13:58:15.634065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.147 [2024-10-30 13:58:15.634224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.147 [2024-10-30 13:58:15.634383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.147 [2024-10-30 13:58:15.634385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.148 [2024-10-30 13:58:16.366535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.148 Null1 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.148 13:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.148 [2024-10-30 13:58:16.427006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.148 Null2 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.148 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:18.411 Null3 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 Null4 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.411 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:18.675 00:11:18.675 Discovery Log Number of Records 6, Generation counter 6 00:11:18.675 =====Discovery Log Entry 0====== 00:11:18.675 trtype: tcp 00:11:18.675 adrfam: ipv4 00:11:18.675 subtype: current discovery subsystem 00:11:18.675 treq: not required 00:11:18.675 portid: 0 00:11:18.675 trsvcid: 4420 00:11:18.675 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:18.675 traddr: 10.0.0.2 00:11:18.675 eflags: explicit discovery connections, duplicate discovery information 00:11:18.675 sectype: none 00:11:18.675 =====Discovery Log Entry 1====== 00:11:18.675 trtype: tcp 00:11:18.675 adrfam: ipv4 00:11:18.675 subtype: nvme subsystem 00:11:18.675 treq: not required 00:11:18.675 portid: 0 00:11:18.675 trsvcid: 4420 00:11:18.675 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:18.675 traddr: 10.0.0.2 00:11:18.675 eflags: none 00:11:18.675 sectype: none 00:11:18.675 =====Discovery Log Entry 2====== 00:11:18.675 trtype: tcp 00:11:18.675 adrfam: ipv4 00:11:18.675 subtype: nvme subsystem 00:11:18.675 treq: not required 00:11:18.675 portid: 0 00:11:18.675 trsvcid: 4420 00:11:18.675 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:18.675 traddr: 10.0.0.2 00:11:18.675 eflags: none 00:11:18.675 sectype: none 00:11:18.675 =====Discovery Log Entry 3====== 00:11:18.675 trtype: tcp 00:11:18.675 adrfam: ipv4 00:11:18.675 subtype: nvme subsystem 00:11:18.675 treq: not required 00:11:18.675 portid: 0 00:11:18.675 trsvcid: 4420 00:11:18.675 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:18.675 traddr: 10.0.0.2 00:11:18.675 eflags: none 00:11:18.675 sectype: none 00:11:18.675 =====Discovery Log Entry 4====== 00:11:18.675 trtype: tcp 00:11:18.675 adrfam: ipv4 00:11:18.675 subtype: nvme subsystem 
00:11:18.675 treq: not required 00:11:18.675 portid: 0 00:11:18.675 trsvcid: 4420 00:11:18.675 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:18.675 traddr: 10.0.0.2 00:11:18.675 eflags: none 00:11:18.675 sectype: none 00:11:18.675 =====Discovery Log Entry 5====== 00:11:18.675 trtype: tcp 00:11:18.675 adrfam: ipv4 00:11:18.675 subtype: discovery subsystem referral 00:11:18.675 treq: not required 00:11:18.675 portid: 0 00:11:18.675 trsvcid: 4430 00:11:18.675 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:18.675 traddr: 10.0.0.2 00:11:18.675 eflags: none 00:11:18.675 sectype: none 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:18.675 Perform nvmf subsystem discovery via RPC 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.675 [ 00:11:18.675 { 00:11:18.675 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:18.675 "subtype": "Discovery", 00:11:18.675 "listen_addresses": [ 00:11:18.675 { 00:11:18.675 "trtype": "TCP", 00:11:18.675 "adrfam": "IPv4", 00:11:18.675 "traddr": "10.0.0.2", 00:11:18.675 "trsvcid": "4420" 00:11:18.675 } 00:11:18.675 ], 00:11:18.675 "allow_any_host": true, 00:11:18.675 "hosts": [] 00:11:18.675 }, 00:11:18.675 { 00:11:18.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.675 "subtype": "NVMe", 00:11:18.675 "listen_addresses": [ 00:11:18.675 { 00:11:18.675 "trtype": "TCP", 00:11:18.675 "adrfam": "IPv4", 00:11:18.675 "traddr": "10.0.0.2", 00:11:18.675 "trsvcid": "4420" 00:11:18.675 } 00:11:18.675 ], 00:11:18.675 "allow_any_host": true, 00:11:18.675 "hosts": [], 00:11:18.675 "serial_number": "SPDK00000000000001", 00:11:18.675 "model_number": "SPDK bdev Controller", 00:11:18.675 "max_namespaces": 32, 00:11:18.675 "min_cntlid": 1, 00:11:18.675 "max_cntlid": 65519, 00:11:18.675 "namespaces": [ 00:11:18.675 { 00:11:18.675 "nsid": 1, 00:11:18.675 "bdev_name": "Null1", 00:11:18.675 "name": "Null1", 00:11:18.675 "nguid": "01B73109EB7D4A859A5BA4D2701204BB", 00:11:18.675 "uuid": "01b73109-eb7d-4a85-9a5b-a4d2701204bb" 00:11:18.675 } 00:11:18.675 ] 00:11:18.675 }, 00:11:18.675 { 00:11:18.675 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:18.675 "subtype": "NVMe", 00:11:18.675 "listen_addresses": [ 00:11:18.675 { 00:11:18.675 "trtype": "TCP", 00:11:18.675 "adrfam": "IPv4", 00:11:18.675 "traddr": "10.0.0.2", 00:11:18.675 "trsvcid": "4420" 00:11:18.675 } 00:11:18.675 ], 00:11:18.675 "allow_any_host": true, 00:11:18.675 "hosts": [], 00:11:18.675 "serial_number": "SPDK00000000000002", 00:11:18.675 "model_number": "SPDK bdev Controller", 00:11:18.675 "max_namespaces": 32, 00:11:18.675 "min_cntlid": 1, 00:11:18.675 "max_cntlid": 65519, 00:11:18.675 "namespaces": [ 00:11:18.675 { 00:11:18.675 "nsid": 1, 00:11:18.675 "bdev_name": "Null2", 00:11:18.675 "name": "Null2", 00:11:18.675 "nguid": "18DB18C78EBA425EA077ECEF40CF4EE4", 00:11:18.675 "uuid": "18db18c7-8eba-425e-a077-ecef40cf4ee4" 00:11:18.675 } 00:11:18.675 ] 00:11:18.675 }, 00:11:18.675 { 00:11:18.675 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:18.675 "subtype": "NVMe", 00:11:18.675 "listen_addresses": [ 00:11:18.675 { 00:11:18.675 "trtype": "TCP", 00:11:18.675 "adrfam": "IPv4", 00:11:18.675 "traddr": "10.0.0.2", 
00:11:18.675 "trsvcid": "4420" 00:11:18.675 } 00:11:18.675 ], 00:11:18.675 "allow_any_host": true, 00:11:18.675 "hosts": [], 00:11:18.675 "serial_number": "SPDK00000000000003", 00:11:18.675 "model_number": "SPDK bdev Controller", 00:11:18.675 "max_namespaces": 32, 00:11:18.675 "min_cntlid": 1, 00:11:18.675 "max_cntlid": 65519, 00:11:18.675 "namespaces": [ 00:11:18.675 { 00:11:18.675 "nsid": 1, 00:11:18.675 "bdev_name": "Null3", 00:11:18.675 "name": "Null3", 00:11:18.675 "nguid": "5AB5B91B991F4B6D84C94E07A055E323", 00:11:18.675 "uuid": "5ab5b91b-991f-4b6d-84c9-4e07a055e323" 00:11:18.675 } 00:11:18.675 ] 00:11:18.675 }, 00:11:18.675 { 00:11:18.675 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:18.675 "subtype": "NVMe", 00:11:18.675 "listen_addresses": [ 00:11:18.675 { 00:11:18.675 "trtype": "TCP", 00:11:18.675 "adrfam": "IPv4", 00:11:18.675 "traddr": "10.0.0.2", 00:11:18.675 "trsvcid": "4420" 00:11:18.675 } 00:11:18.675 ], 00:11:18.675 "allow_any_host": true, 00:11:18.675 "hosts": [], 00:11:18.675 "serial_number": "SPDK00000000000004", 00:11:18.675 "model_number": "SPDK bdev Controller", 00:11:18.675 "max_namespaces": 32, 00:11:18.675 "min_cntlid": 1, 00:11:18.675 "max_cntlid": 65519, 00:11:18.675 "namespaces": [ 00:11:18.675 { 00:11:18.675 "nsid": 1, 00:11:18.675 "bdev_name": "Null4", 00:11:18.675 "name": "Null4", 00:11:18.675 "nguid": "E7F96E506D434436BA5E0CE22101B0B6", 00:11:18.675 "uuid": "e7f96e50-6d43-4436-ba5e-0ce22101b0b6" 00:11:18.675 } 00:11:18.675 ] 00:11:18.675 } 00:11:18.675 ] 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.675 13:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.675 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:18.676 13:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.676 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.676 rmmod nvme_tcp 00:11:18.676 rmmod nvme_fabrics 00:11:18.937 rmmod nvme_keyring 00:11:18.937 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 916889 ']' 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 916889 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 916889 ']' 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 916889 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 916889 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 916889' 00:11:18.937 killing process with pid 916889 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 916889 00:11:18.937 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 916889 00:11:18.937 13:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:18.938 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:18.938 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:18.938 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:18.938 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.199 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:19.199 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.199 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.199 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.199 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.199 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.199 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.114 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.114 00:11:21.114 real 0m11.662s 00:11:21.114 user 0m8.695s 00:11:21.114 sys 0m6.216s 00:11:21.114 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.114 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.114 ************************************ 00:11:21.114 END TEST nvmf_target_discovery 00:11:21.114 ************************************ 00:11:21.114 13:58:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:21.114 13:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.114 13:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.114 13:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.114 ************************************ 00:11:21.114 START TEST nvmf_referrals 00:11:21.114 ************************************ 00:11:21.114 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:21.376 * Looking for test storage... 
00:11:21.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.376 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:21.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.377 --rc genhtml_branch_coverage=1 00:11:21.377 --rc genhtml_function_coverage=1 00:11:21.377 --rc genhtml_legend=1 00:11:21.377 --rc geninfo_all_blocks=1 00:11:21.377 --rc geninfo_unexecuted_blocks=1 00:11:21.377 00:11:21.377 ' 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:21.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.377 --rc genhtml_branch_coverage=1 00:11:21.377 --rc genhtml_function_coverage=1 00:11:21.377 --rc genhtml_legend=1 00:11:21.377 --rc geninfo_all_blocks=1 00:11:21.377 --rc geninfo_unexecuted_blocks=1 00:11:21.377 00:11:21.377 ' 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:21.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.377 --rc genhtml_branch_coverage=1 00:11:21.377 --rc genhtml_function_coverage=1 00:11:21.377 --rc genhtml_legend=1 00:11:21.377 --rc geninfo_all_blocks=1 00:11:21.377 --rc geninfo_unexecuted_blocks=1 00:11:21.377 00:11:21.377 ' 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:21.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.377 --rc genhtml_branch_coverage=1 00:11:21.377 --rc genhtml_function_coverage=1 00:11:21.377 --rc genhtml_legend=1 00:11:21.377 --rc geninfo_all_blocks=1 00:11:21.377 --rc geninfo_unexecuted_blocks=1 00:11:21.377 00:11:21.377 ' 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.377 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.378 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:29.522 13:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:29.522 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:29.522 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:29.522 
13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:29.522 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:29.522 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:29.522 13:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.522 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:11:29.522 00:11:29.522 --- 10.0.0.2 ping statistics --- 00:11:29.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.522 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:11:29.522 00:11:29.522 --- 10.0.0.1 ping statistics --- 00:11:29.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.522 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.522 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=921285 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 921285 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 921285 ']' 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
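For orientation, the target launch that the test is waiting on above reduces to roughly the following (a minimal bash sketch: the namespace name, binary path style, and core mask are the ones used in this run, and the test's waitforlisten helper is approximated here by polling the RPC socket with rpc_get_methods):

    # Launch the target inside the test namespace with the same flags as above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default UNIX-domain RPC socket until the target answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done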
00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.523 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.523 [2024-10-30 13:58:27.169867] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:11:29.523 [2024-10-30 13:58:27.169938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.523 [2024-10-30 13:58:27.270176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.523 [2024-10-30 13:58:27.323638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.523 [2024-10-30 13:58:27.323693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.523 [2024-10-30 13:58:27.323705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.523 [2024-10-30 13:58:27.323714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.523 [2024-10-30 13:58:27.323729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.523 [2024-10-30 13:58:27.325779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.523 [2024-10-30 13:58:27.325898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.523 [2024-10-30 13:58:27.326052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.523 [2024-10-30 13:58:27.326055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.784 [2024-10-30 13:58:28.054422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:29.784 [2024-10-30 13:58:28.070766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.784 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.045 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.045 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:30.045 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.045 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.046 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:30.307 13:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.307 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.568 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.830 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:30.830 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:30.830 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:30.830 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:30.830 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:30.830 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.831 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.092 13:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.092 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.354 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:31.354 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:31.354 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:31.354 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:31.354 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:31.354 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.354 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:31.614 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:31.614 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:31.614 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:31.614 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:31.614 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.614 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.875 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
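The trace above exercises the SPDK discovery-referral RPCs end to end: referrals are added for both the discovery subsystem and nqn.2016-06.io.spdk:cnode1, read back with nvmf_discovery_get_referrals, surfaced to the host via nvme discover against 10.0.0.2:8009, and then removed until the referral list is empty. A minimal standalone sketch of the same sequence, assuming scripts/rpc.py is the client behind the test's rpc_cmd wrapper and reusing the addresses and ports shown in the log, might look like:

# Sketch only: replays the referral add/get/remove flow seen in the trace.
# Assumes a running nvmf_tgt with a TCP discovery listener on 10.0.0.2:8009
# and rpc.py reachable at ./scripts/rpc.py (path is an assumption).
RPC=./scripts/rpc.py

# Point hosts at a second discovery service and at cnode1 directly.
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

# Target-side view of the configured referrals.
$RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

# Host-side view: referral records appear in the discovery log page.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# Tear down; afterwards get_referrals should report an empty list.
$RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
$RPC nvmf_discovery_get_referrals | jq length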
00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.135 rmmod nvme_tcp 00:11:32.135 rmmod nvme_fabrics 00:11:32.135 rmmod nvme_keyring 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 921285 ']' 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 921285 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 921285 ']' 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 921285 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 921285 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 921285' 00:11:32.135 killing process with pid 921285 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 921285 00:11:32.135 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 921285 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.396 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.308 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.309 00:11:34.309 real 0m13.140s 00:11:34.309 user 0m15.648s 00:11:34.309 sys 0m6.532s 00:11:34.309 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.309 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.309 ************************************ 00:11:34.309 END TEST nvmf_referrals 00:11:34.309 ************************************ 00:11:34.309 13:58:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:34.309 13:58:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.309 13:58:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.309 13:58:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.571 ************************************ 00:11:34.571 START TEST nvmf_connect_disconnect 00:11:34.571 ************************************ 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:34.571 * Looking for test storage... 00:11:34.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:34.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.571 --rc genhtml_branch_coverage=1 00:11:34.571 --rc genhtml_function_coverage=1 00:11:34.571 --rc genhtml_legend=1 00:11:34.571 --rc geninfo_all_blocks=1 00:11:34.571 --rc geninfo_unexecuted_blocks=1 00:11:34.571 00:11:34.571 ' 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:34.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.571 --rc genhtml_branch_coverage=1 00:11:34.571 --rc genhtml_function_coverage=1 00:11:34.571 --rc genhtml_legend=1 00:11:34.571 --rc geninfo_all_blocks=1 00:11:34.571 --rc geninfo_unexecuted_blocks=1 00:11:34.571 00:11:34.571 ' 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:34.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.571 --rc genhtml_branch_coverage=1 00:11:34.571 --rc genhtml_function_coverage=1 00:11:34.571 --rc genhtml_legend=1 00:11:34.571 --rc geninfo_all_blocks=1 00:11:34.571 --rc geninfo_unexecuted_blocks=1 00:11:34.571 00:11:34.571 ' 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:34.571 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.571 --rc genhtml_branch_coverage=1 00:11:34.571 --rc genhtml_function_coverage=1 00:11:34.571 --rc genhtml_legend=1 00:11:34.571 --rc geninfo_all_blocks=1 00:11:34.571 --rc geninfo_unexecuted_blocks=1 00:11:34.571 00:11:34.571 ' 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.571 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.571 13:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.572 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.572 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.572 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.833 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.977 
13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.977 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:42.977 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.977 
13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:42.977 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.977 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:42.978 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
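Here the common.sh helpers map each detected E810 function (0x8086:0x159b) to its kernel net device by globbing sysfs, which is what produces the "Found net devices under 0000:4b:00.x" lines. A small sketch of that lookup, assuming the same PCI addresses reported in the trace, is:

# Sketch: list the netdev name(s) backing a PCI function, as the trace does
# via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done
done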
00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:42.978 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:11:42.978 00:11:42.978 --- 10.0.0.2 ping statistics --- 00:11:42.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.978 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:11:42.978 00:11:42.978 --- 10.0.0.1 ping statistics --- 00:11:42.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.978 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=926354 00:11:42.978 13:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 926354 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 926354 ']' 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.978 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:42.978 [2024-10-30 13:58:40.426414] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:11:42.978 [2024-10-30 13:58:40.426488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.978 [2024-10-30 13:58:40.527899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.978 [2024-10-30 13:58:40.580623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.979 [2024-10-30 13:58:40.580679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.979 [2024-10-30 13:58:40.580691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.979 [2024-10-30 13:58:40.580701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.979 [2024-10-30 13:58:40.580709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
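At this point the harness has split the two cvl ports into a namespace-based topology: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target-side port (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), reachability is checked with ping in both directions, and nvmf_tgt is started inside the namespace. A condensed sketch of the same setup, reusing the interface names, addresses, and flags from the trace (the iptables comment and full build path are omitted here for brevity), would be:

# Sketch of the namespace topology built above (names/addresses as in the trace).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                   # initiator -> target reachability

# Launch the target inside the namespace (the trace uses the full
# build/bin/nvmf_tgt path and the same -i 0 -e 0xFFFF -m 0xF arguments).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &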
00:11:42.979 [2024-10-30 13:58:40.582821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.979 [2024-10-30 13:58:40.583072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.979 [2024-10-30 13:58:40.583075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.979 [2024-10-30 13:58:40.582893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.979 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.979 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:42.979 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.979 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.979 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.241 [2024-10-30 13:58:41.299566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.241 13:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.241 [2024-10-30 13:58:41.383397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:43.241 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:47.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:01.557 rmmod nvme_tcp 00:12:01.557 rmmod nvme_fabrics 00:12:01.557 rmmod nvme_keyring 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 926354 ']' 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 926354 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 926354 ']' 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 926354 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
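With the listener up, connect_disconnect.sh provisions a malloc-backed subsystem over RPC and then runs five iterations (num_iterations=5); the "disconnected 1 controller(s)" lines above are the per-iteration result of the host-side disconnect. A minimal sketch of that flow, assuming rpc.py issues the same RPCs as the test's rpc_cmd wrapper and that each iteration pairs nvme connect with nvme disconnect, looks like:

# Sketch: provision the target as the trace does, then loop connect/disconnect.
RPC=./scripts/rpc.py          # assumed path to the SPDK RPC client
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 64 512                                    # creates Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

for i in $(seq 1 5); do       # num_iterations=5 in the trace
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
done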
00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 926354 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 926354' 00:12:01.557 killing process with pid 926354 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 926354 00:12:01.557 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 926354 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.818 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.734 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.734 00:12:03.734 real 0m29.377s 00:12:03.734 user 1m19.213s 00:12:03.734 sys 0m7.132s 00:12:03.734 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.734 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.734 ************************************ 00:12:03.734 END TEST nvmf_connect_disconnect 00:12:03.734 ************************************ 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:12:03.996 ************************************ 00:12:03.996 START TEST nvmf_multitarget 00:12:03.996 ************************************ 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.996 * Looking for test storage... 00:12:03.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.996 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.997 --rc genhtml_branch_coverage=1 00:12:03.997 --rc genhtml_function_coverage=1 00:12:03.997 --rc genhtml_legend=1 00:12:03.997 --rc geninfo_all_blocks=1 00:12:03.997 --rc geninfo_unexecuted_blocks=1 00:12:03.997 00:12:03.997 ' 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.997 --rc genhtml_branch_coverage=1 00:12:03.997 --rc genhtml_function_coverage=1 00:12:03.997 --rc genhtml_legend=1 00:12:03.997 --rc geninfo_all_blocks=1 00:12:03.997 --rc geninfo_unexecuted_blocks=1 00:12:03.997 00:12:03.997 ' 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.997 --rc genhtml_branch_coverage=1 00:12:03.997 --rc genhtml_function_coverage=1 00:12:03.997 --rc genhtml_legend=1 00:12:03.997 --rc geninfo_all_blocks=1 00:12:03.997 --rc geninfo_unexecuted_blocks=1 00:12:03.997 00:12:03.997 ' 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.997 --rc genhtml_branch_coverage=1 00:12:03.997 --rc genhtml_function_coverage=1 00:12:03.997 --rc genhtml_legend=1 00:12:03.997 --rc geninfo_all_blocks=1 00:12:03.997 --rc geninfo_unexecuted_blocks=1 00:12:03.997 00:12:03.997 ' 00:12:03.997 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.259 13:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:04.259 13:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.259 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.260 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.260 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.260 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.260 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.260 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.260 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:04.260 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:04.260 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:04.260 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
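[editorial note] The trace above steps through gather_supported_nvmf_pci_devs as it builds the e810/x722/mlx lists from PCI vendor:device IDs, just before the "Found 0000:4b:00.x" lines below. As a rough, standalone illustration only (not the harness code), the following sketch classifies NICs the same way by reading sysfs directly; the real script resolves IDs through its pci_bus_cache map instead of this loop, and the variable names here are only illustrative.

    intel=0x8086 mellanox=0x15b3
    declare -a e810 x722 mlx
    for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
      case "$vendor:$device" in
        "$intel:0x1592"|"$intel:0x159b") e810+=("${dev##*/}") ;;  # Intel E810 (the 0x159b ports found below)
        "$intel:0x37d2")                 x722+=("${dev##*/}") ;;  # Intel X722
        "$mellanox:"*)                   mlx+=("${dev##*/}") ;;   # Mellanox IDs collapsed into one pattern for brevity
      esac
    done
    echo "e810: ${e810[*]:-none} x722: ${x722[*]:-none} mlx: ${mlx[*]:-none}"

Because this run is NET_TYPE=phy with SPDK_TEST_NVMF_NICS=e810, only the e810 list ends up populated, which is why the trace then narrows pci_devs to the two E810 ports.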
00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:12.430 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:12.430 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.430 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:12.431 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:12.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:12:12.431 00:12:12.431 --- 10.0.0.2 ping statistics --- 00:12:12.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.431 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:12:12.431 00:12:12.431 --- 10.0.0.1 ping statistics --- 00:12:12.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.431 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=934368 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 934368 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 934368 ']' 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.431 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.431 [2024-10-30 13:59:09.891234] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:12:12.431 [2024-10-30 13:59:09.891300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.431 [2024-10-30 13:59:09.989217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.431 [2024-10-30 13:59:10.047487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.431 [2024-10-30 13:59:10.047547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.431 [2024-10-30 13:59:10.047561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.431 [2024-10-30 13:59:10.047572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.431 [2024-10-30 13:59:10.047581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.431 [2024-10-30 13:59:10.049622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.431 [2024-10-30 13:59:10.049799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.431 [2024-10-30 13:59:10.049913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.431 [2024-10-30 13:59:10.049913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.431 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.431 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:12.431 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.432 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.432 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.692 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.692 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:12.692 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:12.692 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:12.692 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:12.692 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:12.692 "nvmf_tgt_1" 00:12:12.953 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:12.953 "nvmf_tgt_2" 00:12:12.953 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
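[editorial note] Around this point the test drives multitarget_rpc.py directly against the freshly started nvmf_tgt: it reads the current target count with nvmf_get_targets | jq length, adds nvmf_tgt_1 and nvmf_tgt_2, re-checks the count, and (just below) deletes them again. Condensed into one place, and assuming the workspace path shown in the trace, the exercised sequence looks roughly like this; the -n and -s 32 flags are copied verbatim from the recorded calls, not documented here.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    base=$("$rpc" nvmf_get_targets | jq length)            # only the default target at first (the "1 != 1" check above)
    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32          # same flags as the traced calls
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$("$rpc" nvmf_get_targets | jq length)" -eq $((base + 2)) ] || echo "unexpected target count"
    "$rpc" nvmf_delete_target -n nvmf_tgt_1                # the deletions appear in the trace just below
    "$rpc" nvmf_delete_target -n nvmf_tgt_2
    [ "$("$rpc" nvmf_get_targets | jq length)" -eq "$base" ] || echo "targets were not cleaned up"

With base=1 the intermediate count is 3, which is exactly the "3 != 3" check recorded below before the two nvmf_delete_target calls.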
00:12:12.953 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:12.953 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:12.953 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:13.213 true 00:12:13.213 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:13.213 true 00:12:13.213 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:13.213 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.474 rmmod nvme_tcp 00:12:13.474 rmmod nvme_fabrics 00:12:13.474 rmmod nvme_keyring 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 934368 ']' 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 934368 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 934368 ']' 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 934368 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 934368 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.474 13:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 934368' 00:12:13.474 killing process with pid 934368 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 934368 00:12:13.474 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 934368 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.734 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.648 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:15.910 00:12:15.910 real 0m11.866s 00:12:15.910 user 0m10.279s 00:12:15.910 sys 0m6.201s 00:12:15.910 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.910 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.910 ************************************ 00:12:15.910 END TEST nvmf_multitarget 00:12:15.910 ************************************ 00:12:15.910 13:59:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:15.910 13:59:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.910 13:59:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.910 13:59:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.910 ************************************ 00:12:15.910 START TEST nvmf_rpc 00:12:15.910 ************************************ 00:12:15.910 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:15.910 * Looking for test storage... 
00:12:15.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.910 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.910 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.910 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.173 --rc genhtml_branch_coverage=1 00:12:16.173 --rc genhtml_function_coverage=1 00:12:16.173 --rc genhtml_legend=1 00:12:16.173 --rc geninfo_all_blocks=1 00:12:16.173 --rc geninfo_unexecuted_blocks=1 00:12:16.173 00:12:16.173 ' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.173 --rc genhtml_branch_coverage=1 00:12:16.173 --rc genhtml_function_coverage=1 00:12:16.173 --rc genhtml_legend=1 00:12:16.173 --rc geninfo_all_blocks=1 00:12:16.173 --rc geninfo_unexecuted_blocks=1 00:12:16.173 00:12:16.173 ' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.173 --rc genhtml_branch_coverage=1 00:12:16.173 --rc genhtml_function_coverage=1 00:12:16.173 --rc genhtml_legend=1 00:12:16.173 --rc geninfo_all_blocks=1 00:12:16.173 --rc geninfo_unexecuted_blocks=1 00:12:16.173 00:12:16.173 ' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.173 --rc genhtml_branch_coverage=1 00:12:16.173 --rc genhtml_function_coverage=1 00:12:16.173 --rc genhtml_legend=1 00:12:16.173 --rc geninfo_all_blocks=1 00:12:16.173 --rc geninfo_unexecuted_blocks=1 00:12:16.173 00:12:16.173 ' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
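[editorial note] Both suites in this excerpt end with the same nvmftestfini teardown, visible above after nvmf_connect_disconnect and again after nvmf_multitarget: unload the nvme-tcp modules, kill the nvmf_tgt reactor process, strip the SPDK-tagged firewall rules, tear down the target namespace, and flush the initiator interface before the next suite (nvmf_rpc here) repeats nvmftestinit. Summarised as a minimal sketch; the body of _remove_spdk_ns is not shown in this trace, so the namespace-deletion line is an assumption.

    modprobe -v -r nvme-tcp                               # matches the rmmod nvme_tcp/nvme_fabrics lines in the trace
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # pid set by nvmfappstart (934368 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rules tagged SPDK_NVMF
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true      # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # return the initiator NIC to a clean state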
00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:16.173 13:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.173 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:24.330 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:24.330 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:24.330 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:24.331 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:24.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:24.331 13:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:24.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:12:24.331 00:12:24.331 --- 10.0.0.2 ping statistics --- 00:12:24.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.331 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:12:24.331 00:12:24.331 --- 10.0.0.1 ping statistics --- 00:12:24.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.331 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=938858 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 938858 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 938858 ']' 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.331 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.331 [2024-10-30 13:59:21.819901] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
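The trace above finishes nvmf_tcp_init: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target side (10.0.0.2/24), its sibling port (cvl_0_1) stays in the default namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened through iptables, connectivity is verified with one ping in each direction, nvme-tcp is loaded, and nvmf_tgt is started under ip netns exec. A minimal standalone sketch of that setup, with interface names, addresses and binary paths taken from this log, and the harness's waitforlisten approximated here by polling rpc_get_methods through scripts/rpc.py, might look like:

    #!/usr/bin/env bash
    # Sketch only: rebuilds the namespace topology used by this test run.
    set -e
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this job
    TARGET_IF=cvl_0_0        # port handed to the target namespace (from the log)
    INITIATOR_IF=cvl_0_1     # port left in the default namespace (from the log)
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
    modprobe nvme-tcp

    # Start the target inside the namespace and wait for its RPC socket to answer.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done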
00:12:24.331 [2024-10-30 13:59:21.819969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.331 [2024-10-30 13:59:21.924258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.331 [2024-10-30 13:59:21.977498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.331 [2024-10-30 13:59:21.977558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.331 [2024-10-30 13:59:21.977569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.331 [2024-10-30 13:59:21.977578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.331 [2024-10-30 13:59:21.977586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.331 [2024-10-30 13:59:21.979711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.331 [2024-10-30 13:59:21.979872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.331 [2024-10-30 13:59:21.979925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.331 [2024-10-30 13:59:21.979927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:24.593 "tick_rate": 2400000000, 00:12:24.593 "poll_groups": [ 00:12:24.593 { 00:12:24.593 "name": "nvmf_tgt_poll_group_000", 00:12:24.593 "admin_qpairs": 0, 00:12:24.593 "io_qpairs": 0, 00:12:24.593 "current_admin_qpairs": 0, 00:12:24.593 "current_io_qpairs": 0, 00:12:24.593 "pending_bdev_io": 0, 00:12:24.593 "completed_nvme_io": 0, 00:12:24.593 "transports": [] 00:12:24.593 }, 00:12:24.593 { 00:12:24.593 "name": "nvmf_tgt_poll_group_001", 00:12:24.593 "admin_qpairs": 0, 00:12:24.593 "io_qpairs": 0, 00:12:24.593 "current_admin_qpairs": 0, 00:12:24.593 "current_io_qpairs": 0, 00:12:24.593 "pending_bdev_io": 0, 00:12:24.593 "completed_nvme_io": 0, 00:12:24.593 "transports": [] 00:12:24.593 }, 00:12:24.593 { 00:12:24.593 "name": "nvmf_tgt_poll_group_002", 00:12:24.593 "admin_qpairs": 0, 00:12:24.593 "io_qpairs": 0, 00:12:24.593 
"current_admin_qpairs": 0, 00:12:24.593 "current_io_qpairs": 0, 00:12:24.593 "pending_bdev_io": 0, 00:12:24.593 "completed_nvme_io": 0, 00:12:24.593 "transports": [] 00:12:24.593 }, 00:12:24.593 { 00:12:24.593 "name": "nvmf_tgt_poll_group_003", 00:12:24.593 "admin_qpairs": 0, 00:12:24.593 "io_qpairs": 0, 00:12:24.593 "current_admin_qpairs": 0, 00:12:24.593 "current_io_qpairs": 0, 00:12:24.593 "pending_bdev_io": 0, 00:12:24.593 "completed_nvme_io": 0, 00:12:24.593 "transports": [] 00:12:24.593 } 00:12:24.593 ] 00:12:24.593 }' 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 [2024-10-30 13:59:22.824872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:24.593 "tick_rate": 2400000000, 00:12:24.593 "poll_groups": [ 00:12:24.593 { 00:12:24.593 "name": "nvmf_tgt_poll_group_000", 00:12:24.593 "admin_qpairs": 0, 00:12:24.593 "io_qpairs": 0, 00:12:24.593 "current_admin_qpairs": 0, 00:12:24.593 "current_io_qpairs": 0, 00:12:24.593 "pending_bdev_io": 0, 00:12:24.593 "completed_nvme_io": 0, 00:12:24.593 "transports": [ 00:12:24.593 { 00:12:24.593 "trtype": "TCP" 00:12:24.593 } 00:12:24.593 ] 00:12:24.593 }, 00:12:24.593 { 00:12:24.593 "name": "nvmf_tgt_poll_group_001", 00:12:24.593 "admin_qpairs": 0, 00:12:24.593 "io_qpairs": 0, 00:12:24.593 "current_admin_qpairs": 0, 00:12:24.593 "current_io_qpairs": 0, 00:12:24.593 "pending_bdev_io": 0, 00:12:24.593 "completed_nvme_io": 0, 00:12:24.593 "transports": [ 00:12:24.593 { 00:12:24.593 "trtype": "TCP" 00:12:24.593 } 00:12:24.593 ] 00:12:24.593 }, 00:12:24.593 { 00:12:24.593 "name": "nvmf_tgt_poll_group_002", 00:12:24.593 "admin_qpairs": 0, 00:12:24.593 "io_qpairs": 0, 00:12:24.593 "current_admin_qpairs": 0, 00:12:24.593 "current_io_qpairs": 0, 00:12:24.593 "pending_bdev_io": 0, 00:12:24.593 "completed_nvme_io": 0, 00:12:24.593 "transports": [ 00:12:24.593 { 00:12:24.593 "trtype": "TCP" 
00:12:24.593 } 00:12:24.593 ] 00:12:24.593 }, 00:12:24.593 { 00:12:24.593 "name": "nvmf_tgt_poll_group_003", 00:12:24.593 "admin_qpairs": 0, 00:12:24.593 "io_qpairs": 0, 00:12:24.593 "current_admin_qpairs": 0, 00:12:24.593 "current_io_qpairs": 0, 00:12:24.593 "pending_bdev_io": 0, 00:12:24.593 "completed_nvme_io": 0, 00:12:24.593 "transports": [ 00:12:24.593 { 00:12:24.593 "trtype": "TCP" 00:12:24.593 } 00:12:24.593 ] 00:12:24.593 } 00:12:24.593 ] 00:12:24.593 }' 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:24.593 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.856 Malloc1 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.856 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.856 [2024-10-30 13:59:23.034397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:24.856 [2024-10-30 13:59:23.071576] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:24.856 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:24.856 could not add new controller: failed to write to nvme-fabrics device 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:24.856 13:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.856 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.774 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.774 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:26.774 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.774 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:26.774 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.688 [2024-10-30 13:59:26.809380] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:28.688 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:28.688 could not add new controller: failed to write to nvme-fabrics device 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.688 
13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.688 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.071 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.071 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:30.071 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.071 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:30.071 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.617 
13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.617 [2024-10-30 13:59:30.528547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.617 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:32.618 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.618 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.618 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.618 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.618 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.618 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.618 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.618 13:59:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.002 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.002 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.002 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.002 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:34.002 13:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:35.916 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:35.916 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:35.916 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.916 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:35.916 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.916 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:35.916 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.177 [2024-10-30 13:59:34.299179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.177 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.096 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.096 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:38.096 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.096 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:38.096 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.012 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:40.013 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.013 [2024-10-30 13:59:38.053329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.013 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.400 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.400 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:41.400 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.400 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:41.400 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:43.317 
13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:43.317 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:43.317 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.317 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:43.317 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.317 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:43.317 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
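Each pass of the loop traced here runs the full subsystem lifecycle against the TCP transport created earlier: create the subsystem with serial SPDKISFASTANDAWESOME, listen on 10.0.0.2:4420, attach Malloc1 as namespace 5, allow any host, connect from the initiator side, then tear everything back down. A condensed sketch of one iteration, substituting scripts/rpc.py for the harness's rpc_cmd wrapper (that substitution and the $rpc variable are assumptions; the commands and arguments are taken from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # One-time setup seen earlier in the trace: TCP transport plus a 64 MiB, 512 B-block malloc bdev.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1

    # Per-iteration lifecycle, mirroring target/rpc.sh's loop.
    $rpc nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host "$NQN"

    nvme connect --hostnqn="$HOSTNQN" --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    # ... the harness only waits for the serial to appear here, then disconnects ...
    nvme disconnect -n "$NQN"

    $rpc nvmf_subsystem_remove_ns "$NQN" 5
    $rpc nvmf_delete_subsystem "$NQN"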
00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.578 [2024-10-30 13:59:41.872622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.578 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.840 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.840 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.840 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.840 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.840 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.840 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.840 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.228 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.228 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:45.228 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.228 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:45.228 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
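The waitforserial and waitforserial_disconnect gates that bracket every connect and disconnect above simply poll lsblk for the subsystem's serial number, retrying up to 15 times with a two-second sleep. A reduced sketch of that polling logic (the real helpers also accept an expected device count; this version hard-codes a single device):

    # Wait until a block device exposing the given serial appears (after nvme connect).
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && return 0
            sleep 2
        done
        return 1
    }

    # Wait until no block device with that serial is left (after nvme disconnect).
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME             # used after: nvme connect ...
    waitforserial_disconnect SPDKISFASTANDAWESOME  # used after: nvme disconnect ...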
00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.775 [2024-10-30 13:59:45.639692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.775 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.160 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.160 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:49.160 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.160 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:49.160 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:51.075 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:51.075 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:51.075 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.075 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:51.075 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:51.076 
13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.076 [2024-10-30 13:59:49.364130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.076 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.337 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.337 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.337 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.337 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.337 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.337 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 [2024-10-30 13:59:49.432287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 
13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 [2024-10-30 13:59:49.500463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 [2024-10-30 13:59:49.564676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.338 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 [2024-10-30 13:59:49.632890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:51.599 "tick_rate": 2400000000, 00:12:51.599 "poll_groups": [ 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_000", 00:12:51.599 "admin_qpairs": 0, 00:12:51.599 "io_qpairs": 224, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 224, 00:12:51.599 "transports": [ 00:12:51.599 { 00:12:51.599 "trtype": "TCP" 00:12:51.599 } 00:12:51.599 ] 00:12:51.599 }, 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_001", 00:12:51.599 "admin_qpairs": 1, 00:12:51.599 "io_qpairs": 223, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 272, 00:12:51.599 "transports": [ 00:12:51.599 { 00:12:51.599 "trtype": "TCP" 00:12:51.599 } 00:12:51.599 ] 00:12:51.599 }, 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_002", 00:12:51.599 "admin_qpairs": 6, 00:12:51.599 "io_qpairs": 218, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 513, 00:12:51.599 "transports": [ 00:12:51.599 { 00:12:51.599 "trtype": "TCP" 00:12:51.599 } 00:12:51.599 ] 00:12:51.599 }, 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_003", 00:12:51.599 "admin_qpairs": 0, 00:12:51.599 "io_qpairs": 224, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 230, 00:12:51.599 "transports": [ 00:12:51.599 { 00:12:51.599 "trtype": "TCP" 00:12:51.599 } 00:12:51.599 ] 00:12:51.599 } 00:12:51.599 ] 00:12:51.599 }' 00:12:51.599 13:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:51.599 rmmod nvme_tcp 00:12:51.599 rmmod nvme_fabrics 00:12:51.599 rmmod nvme_keyring 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 938858 ']' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 938858 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 938858 ']' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 938858 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.599 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 938858 00:12:51.860 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.860 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.860 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 938858' 
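The two assertions at rpc.sh@112 and @113 reduce the nvmf_get_stats dump above to single totals: one jq filter per field, summed with awk. A minimal sketch of that aggregation (not the exact jsum helper; the rpc.py path is shortened and assumed to point at the running target):

rpc=./scripts/rpc.py

jsum() {
    local filter=$1
    # One number per poll group comes out of jq; awk adds them up.
    "$rpc" nvmf_get_stats | jq "$filter" | awk '{s += $1} END {print s}'
}

# Mirrors the checks in the log: across the 4 poll groups this run reported
# 7 admin qpairs and 889 I/O qpairs in total, and both sums must be non-zero.
admin_total=$(jsum '.poll_groups[].admin_qpairs')
io_total=$(jsum '.poll_groups[].io_qpairs')
(( admin_total > 0 )) || echo "no admin qpairs were ever created" >&2
(( io_total > 0 ))    || echo "no I/O qpairs were ever created" >&2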
00:12:51.860 killing process with pid 938858 00:12:51.860 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 938858 00:12:51.860 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 938858 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.860 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.409 00:12:54.409 real 0m38.092s 00:12:54.409 user 1m54.292s 00:12:54.409 sys 0m7.801s 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.409 ************************************ 00:12:54.409 END TEST nvmf_rpc 00:12:54.409 ************************************ 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.409 ************************************ 00:12:54.409 START TEST nvmf_invalid 00:12:54.409 ************************************ 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.409 * Looking for test storage... 
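With the checks done, nvmftestfini unwinds the nvmf_rpc setup: the host-side NVMe modules are unloaded, the nvmf_tgt process (pid 938858 in this run) is killed, the SPDK_NVMF-tagged iptables rule is dropped, and the namespace and initiator address are cleaned up. A rough sketch of that sequence, simplified from the common.sh helpers and using this run's names:

# Unload the host-side NVMe/TCP modules that the connect tests pulled in.
modprobe -r nvme-tcp nvme-fabrics

# Stop the target application (938858 was this run's nvmf_tgt pid); the real
# killprocess helper also waits for the process to exit.
kill 938858 2>/dev/null || true

# Drop the SPDK_NVMF-tagged iptables rule while keeping every other rule.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Remove the target-side namespace and clear the initiator address; deleting
# the netns is an assumption about what _remove_spdk_ns does internally.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1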
00:12:54.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.409 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:54.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.410 --rc genhtml_branch_coverage=1 00:12:54.410 --rc genhtml_function_coverage=1 00:12:54.410 --rc genhtml_legend=1 00:12:54.410 --rc geninfo_all_blocks=1 00:12:54.410 --rc geninfo_unexecuted_blocks=1 00:12:54.410 00:12:54.410 ' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:54.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.410 --rc genhtml_branch_coverage=1 00:12:54.410 --rc genhtml_function_coverage=1 00:12:54.410 --rc genhtml_legend=1 00:12:54.410 --rc geninfo_all_blocks=1 00:12:54.410 --rc geninfo_unexecuted_blocks=1 00:12:54.410 00:12:54.410 ' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:54.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.410 --rc genhtml_branch_coverage=1 00:12:54.410 --rc genhtml_function_coverage=1 00:12:54.410 --rc genhtml_legend=1 00:12:54.410 --rc geninfo_all_blocks=1 00:12:54.410 --rc geninfo_unexecuted_blocks=1 00:12:54.410 00:12:54.410 ' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:54.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.410 --rc genhtml_branch_coverage=1 00:12:54.410 --rc genhtml_function_coverage=1 00:12:54.410 --rc genhtml_legend=1 00:12:54.410 --rc geninfo_all_blocks=1 00:12:54.410 --rc geninfo_unexecuted_blocks=1 00:12:54.410 00:12:54.410 ' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:54.410 13:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:54.410 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:02.560 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:02.561 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:02.561 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:02.561 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:02.561 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:13:02.561 00:13:02.561 --- 10.0.0.2 ping statistics --- 00:13:02.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.561 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:13:02.561 00:13:02.561 --- 10.0.0.1 ping statistics --- 00:13:02.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.561 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=948712 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 948712 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 948712 ']' 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.561 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.561 [2024-10-30 14:00:00.028235] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
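The nvmf_invalid test then rebuilds the same test bed: the e810 port used as the target (cvl_0_0) is moved into a private network namespace, both ends are addressed and verified with a ping, and nvmf_tgt is started inside that namespace before the RPC-level checks begin. A condensed sketch of those steps, assuming this run's interface names, addresses and binary paths:

tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk

# Put the target NIC into its own namespace and address both ends.
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up

# Allow NVMe/TCP traffic in and confirm reachability in both directions.
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1

# Start the target inside the namespace and wait for its RPC socket instead
# of sleeping blindly (a simplified stand-in for waitforlisten).
ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
for _ in $(seq 1 100); do
    ./scripts/rpc.py rpc_get_methods &>/dev/null && break
    sleep 0.1
done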
00:13:02.561 [2024-10-30 14:00:00.028312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.561 [2024-10-30 14:00:00.131149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.561 [2024-10-30 14:00:00.187835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.561 [2024-10-30 14:00:00.187893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.561 [2024-10-30 14:00:00.187905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.561 [2024-10-30 14:00:00.187916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.561 [2024-10-30 14:00:00.187925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.561 [2024-10-30 14:00:00.189939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.561 [2024-10-30 14:00:00.190071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.561 [2024-10-30 14:00:00.190236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.561 [2024-10-30 14:00:00.190239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.561 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.561 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:02.561 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:02.562 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:02.562 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.823 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.823 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:02.823 14:00:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7034 00:13:02.823 [2024-10-30 14:00:01.063521] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:02.823 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:02.823 { 00:13:02.823 "nqn": "nqn.2016-06.io.spdk:cnode7034", 00:13:02.823 "tgt_name": "foobar", 00:13:02.823 "method": "nvmf_create_subsystem", 00:13:02.823 "req_id": 1 00:13:02.823 } 00:13:02.823 Got JSON-RPC error response 00:13:02.823 response: 00:13:02.823 { 00:13:02.823 "code": -32603, 00:13:02.823 "message": "Unable to find target foobar" 00:13:02.823 }' 00:13:02.823 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:02.823 { 00:13:02.823 "nqn": "nqn.2016-06.io.spdk:cnode7034", 00:13:02.823 "tgt_name": "foobar", 00:13:02.823 "method": "nvmf_create_subsystem", 00:13:02.823 "req_id": 1 00:13:02.823 } 00:13:02.823 Got JSON-RPC error response 00:13:02.823 
response: 00:13:02.823 { 00:13:02.823 "code": -32603, 00:13:02.823 "message": "Unable to find target foobar" 00:13:02.823 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:02.823 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:02.823 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15921 00:13:03.084 [2024-10-30 14:00:01.272452] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15921: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:03.084 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:03.084 { 00:13:03.084 "nqn": "nqn.2016-06.io.spdk:cnode15921", 00:13:03.084 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:03.084 "method": "nvmf_create_subsystem", 00:13:03.084 "req_id": 1 00:13:03.084 } 00:13:03.084 Got JSON-RPC error response 00:13:03.084 response: 00:13:03.084 { 00:13:03.084 "code": -32602, 00:13:03.084 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:03.084 }' 00:13:03.084 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:03.084 { 00:13:03.084 "nqn": "nqn.2016-06.io.spdk:cnode15921", 00:13:03.085 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:03.085 "method": "nvmf_create_subsystem", 00:13:03.085 "req_id": 1 00:13:03.085 } 00:13:03.085 Got JSON-RPC error response 00:13:03.085 response: 00:13:03.085 { 00:13:03.085 "code": -32602, 00:13:03.085 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:03.085 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:03.085 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:03.085 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20229 00:13:03.346 [2024-10-30 14:00:01.485203] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20229: invalid model number 'SPDK_Controller' 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:03.346 { 00:13:03.346 "nqn": "nqn.2016-06.io.spdk:cnode20229", 00:13:03.346 "model_number": "SPDK_Controller\u001f", 00:13:03.346 "method": "nvmf_create_subsystem", 00:13:03.346 "req_id": 1 00:13:03.346 } 00:13:03.346 Got JSON-RPC error response 00:13:03.346 response: 00:13:03.346 { 00:13:03.346 "code": -32602, 00:13:03.346 "message": "Invalid MN SPDK_Controller\u001f" 00:13:03.346 }' 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:03.346 { 00:13:03.346 "nqn": "nqn.2016-06.io.spdk:cnode20229", 00:13:03.346 "model_number": "SPDK_Controller\u001f", 00:13:03.346 "method": "nvmf_create_subsystem", 00:13:03.346 "req_id": 1 00:13:03.346 } 00:13:03.346 Got JSON-RPC error response 00:13:03.346 response: 00:13:03.346 { 00:13:03.346 "code": -32602, 00:13:03.346 "message": "Invalid MN SPDK_Controller\u001f" 00:13:03.346 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:03.346 14:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.346 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:03.347 14:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.347 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:03.609 
14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'vEse\CzqLLlVvwTUm0J(q' 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'vEse\CzqLLlVvwTUm0J(q' nqn.2016-06.io.spdk:cnode8674 00:13:03.609 [2024-10-30 14:00:01.866715] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8674: invalid serial number 'vEse\CzqLLlVvwTUm0J(q' 00:13:03.609 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:03.609 { 00:13:03.609 "nqn": "nqn.2016-06.io.spdk:cnode8674", 00:13:03.609 "serial_number": "vEse\\CzqLLlVvwTUm0J(q", 00:13:03.609 "method": "nvmf_create_subsystem", 00:13:03.609 "req_id": 1 00:13:03.609 } 00:13:03.609 Got JSON-RPC error response 00:13:03.609 response: 00:13:03.609 { 00:13:03.609 "code": -32602, 00:13:03.609 "message": "Invalid SN vEse\\CzqLLlVvwTUm0J(q" 00:13:03.609 }' 00:13:03.610 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:03.610 { 00:13:03.610 "nqn": "nqn.2016-06.io.spdk:cnode8674", 00:13:03.610 "serial_number": "vEse\\CzqLLlVvwTUm0J(q", 00:13:03.610 "method": "nvmf_create_subsystem", 00:13:03.610 "req_id": 1 00:13:03.610 } 00:13:03.610 Got JSON-RPC error response 00:13:03.610 response: 00:13:03.610 { 00:13:03.610 "code": -32602, 00:13:03.610 "message": "Invalid SN vEse\\CzqLLlVvwTUm0J(q" 00:13:03.610 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:03.610 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:03.610 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:03.610 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' 
'77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:03.610 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:03.873 14:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:03.873 14:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.873 
14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:03.873 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 
14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.874 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:04.135 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x2a' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ G == \- ]] 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'G<##GC3f0p>vs{#viL6NN)TB|S_Ty?VnA!1qO}`z*' 00:13:04.136 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'G<##GC3f0p>vs{#viL6NN)TB|S_Ty?VnA!1qO}`z*' nqn.2016-06.io.spdk:cnode16315 00:13:04.136 [2024-10-30 14:00:02.412812] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16315: invalid model number 'G<##GC3f0p>vs{#viL6NN)TB|S_Ty?VnA!1qO}`z*' 00:13:04.398 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:04.398 { 00:13:04.398 "nqn": "nqn.2016-06.io.spdk:cnode16315", 00:13:04.398 "model_number": "G<##GC3f0p>vs{#viL6NN)TB|S_Ty?VnA!1qO}`z*", 00:13:04.398 "method": "nvmf_create_subsystem", 00:13:04.398 "req_id": 1 00:13:04.398 } 00:13:04.398 Got JSON-RPC error response 00:13:04.398 response: 00:13:04.398 { 00:13:04.398 "code": -32602, 00:13:04.398 "message": "Invalid MN G<##GC3f0p>vs{#viL6NN)TB|S_Ty?VnA!1qO}`z*" 00:13:04.398 }' 00:13:04.398 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:04.398 { 00:13:04.398 "nqn": "nqn.2016-06.io.spdk:cnode16315", 00:13:04.398 "model_number": "G<##GC3f0p>vs{#viL6NN)TB|S_Ty?VnA!1qO}`z*", 00:13:04.398 "method": "nvmf_create_subsystem", 00:13:04.398 "req_id": 1 00:13:04.398 } 00:13:04.398 Got JSON-RPC error response 00:13:04.398 response: 00:13:04.398 { 00:13:04.398 "code": -32602, 00:13:04.398 "message": "Invalid MN G<##GC3f0p>vs{#viL6NN)TB|S_Ty?VnA!1qO}`z*" 00:13:04.398 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:04.398 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:04.398 [2024-10-30 14:00:02.613629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.398 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:04.659 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:04.659 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:04.659 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:04.659 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:04.659 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:04.919 [2024-10-30 14:00:02.994857] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:04.919 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:04.919 { 00:13:04.919 "nqn": 
"nqn.2016-06.io.spdk:cnode", 00:13:04.919 "listen_address": { 00:13:04.919 "trtype": "tcp", 00:13:04.919 "traddr": "", 00:13:04.919 "trsvcid": "4421" 00:13:04.919 }, 00:13:04.919 "method": "nvmf_subsystem_remove_listener", 00:13:04.919 "req_id": 1 00:13:04.919 } 00:13:04.919 Got JSON-RPC error response 00:13:04.919 response: 00:13:04.919 { 00:13:04.919 "code": -32602, 00:13:04.919 "message": "Invalid parameters" 00:13:04.919 }' 00:13:04.919 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:04.919 { 00:13:04.919 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:04.919 "listen_address": { 00:13:04.919 "trtype": "tcp", 00:13:04.919 "traddr": "", 00:13:04.919 "trsvcid": "4421" 00:13:04.919 }, 00:13:04.919 "method": "nvmf_subsystem_remove_listener", 00:13:04.919 "req_id": 1 00:13:04.919 } 00:13:04.919 Got JSON-RPC error response 00:13:04.919 response: 00:13:04.919 { 00:13:04.920 "code": -32602, 00:13:04.920 "message": "Invalid parameters" 00:13:04.920 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:04.920 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1007 -i 0 00:13:04.920 [2024-10-30 14:00:03.179399] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1007: invalid cntlid range [0-65519] 00:13:04.920 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:04.920 { 00:13:04.920 "nqn": "nqn.2016-06.io.spdk:cnode1007", 00:13:04.920 "min_cntlid": 0, 00:13:04.920 "method": "nvmf_create_subsystem", 00:13:04.920 "req_id": 1 00:13:04.920 } 00:13:04.920 Got JSON-RPC error response 00:13:04.920 response: 00:13:04.920 { 00:13:04.920 "code": -32602, 00:13:04.920 "message": "Invalid cntlid range [0-65519]" 00:13:04.920 }' 00:13:04.920 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:04.920 { 00:13:04.920 "nqn": "nqn.2016-06.io.spdk:cnode1007", 00:13:04.920 "min_cntlid": 0, 00:13:04.920 "method": "nvmf_create_subsystem", 00:13:04.920 "req_id": 1 00:13:04.920 } 00:13:04.920 Got JSON-RPC error response 00:13:04.920 response: 00:13:04.920 { 00:13:04.920 "code": -32602, 00:13:04.920 "message": "Invalid cntlid range [0-65519]" 00:13:04.920 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.920 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31053 -i 65520 00:13:05.180 [2024-10-30 14:00:03.360013] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31053: invalid cntlid range [65520-65519] 00:13:05.180 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:05.180 { 00:13:05.180 "nqn": "nqn.2016-06.io.spdk:cnode31053", 00:13:05.180 "min_cntlid": 65520, 00:13:05.180 "method": "nvmf_create_subsystem", 00:13:05.180 "req_id": 1 00:13:05.180 } 00:13:05.180 Got JSON-RPC error response 00:13:05.180 response: 00:13:05.180 { 00:13:05.180 "code": -32602, 00:13:05.180 "message": "Invalid cntlid range [65520-65519]" 00:13:05.180 }' 00:13:05.180 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:05.180 { 00:13:05.180 "nqn": "nqn.2016-06.io.spdk:cnode31053", 00:13:05.180 "min_cntlid": 65520, 00:13:05.180 "method": "nvmf_create_subsystem", 
00:13:05.180 "req_id": 1 00:13:05.180 } 00:13:05.180 Got JSON-RPC error response 00:13:05.180 response: 00:13:05.180 { 00:13:05.180 "code": -32602, 00:13:05.180 "message": "Invalid cntlid range [65520-65519]" 00:13:05.180 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.180 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27807 -I 0 00:13:05.442 [2024-10-30 14:00:03.540547] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27807: invalid cntlid range [1-0] 00:13:05.442 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:05.442 { 00:13:05.442 "nqn": "nqn.2016-06.io.spdk:cnode27807", 00:13:05.442 "max_cntlid": 0, 00:13:05.442 "method": "nvmf_create_subsystem", 00:13:05.442 "req_id": 1 00:13:05.442 } 00:13:05.442 Got JSON-RPC error response 00:13:05.442 response: 00:13:05.442 { 00:13:05.442 "code": -32602, 00:13:05.442 "message": "Invalid cntlid range [1-0]" 00:13:05.442 }' 00:13:05.442 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:05.442 { 00:13:05.442 "nqn": "nqn.2016-06.io.spdk:cnode27807", 00:13:05.442 "max_cntlid": 0, 00:13:05.442 "method": "nvmf_create_subsystem", 00:13:05.442 "req_id": 1 00:13:05.442 } 00:13:05.442 Got JSON-RPC error response 00:13:05.442 response: 00:13:05.442 { 00:13:05.442 "code": -32602, 00:13:05.442 "message": "Invalid cntlid range [1-0]" 00:13:05.442 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.442 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10238 -I 65520 00:13:05.442 [2024-10-30 14:00:03.721128] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10238: invalid cntlid range [1-65520] 00:13:05.703 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:05.703 { 00:13:05.703 "nqn": "nqn.2016-06.io.spdk:cnode10238", 00:13:05.703 "max_cntlid": 65520, 00:13:05.703 "method": "nvmf_create_subsystem", 00:13:05.703 "req_id": 1 00:13:05.703 } 00:13:05.703 Got JSON-RPC error response 00:13:05.703 response: 00:13:05.703 { 00:13:05.703 "code": -32602, 00:13:05.703 "message": "Invalid cntlid range [1-65520]" 00:13:05.703 }' 00:13:05.703 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:05.703 { 00:13:05.703 "nqn": "nqn.2016-06.io.spdk:cnode10238", 00:13:05.703 "max_cntlid": 65520, 00:13:05.703 "method": "nvmf_create_subsystem", 00:13:05.703 "req_id": 1 00:13:05.703 } 00:13:05.703 Got JSON-RPC error response 00:13:05.703 response: 00:13:05.703 { 00:13:05.703 "code": -32602, 00:13:05.703 "message": "Invalid cntlid range [1-65520]" 00:13:05.703 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.703 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17011 -i 6 -I 5 00:13:05.703 [2024-10-30 14:00:03.901710] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17011: invalid cntlid range [6-5] 00:13:05.704 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:05.704 { 00:13:05.704 "nqn": 
"nqn.2016-06.io.spdk:cnode17011", 00:13:05.704 "min_cntlid": 6, 00:13:05.704 "max_cntlid": 5, 00:13:05.704 "method": "nvmf_create_subsystem", 00:13:05.704 "req_id": 1 00:13:05.704 } 00:13:05.704 Got JSON-RPC error response 00:13:05.704 response: 00:13:05.704 { 00:13:05.704 "code": -32602, 00:13:05.704 "message": "Invalid cntlid range [6-5]" 00:13:05.704 }' 00:13:05.704 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:05.704 { 00:13:05.704 "nqn": "nqn.2016-06.io.spdk:cnode17011", 00:13:05.704 "min_cntlid": 6, 00:13:05.704 "max_cntlid": 5, 00:13:05.704 "method": "nvmf_create_subsystem", 00:13:05.704 "req_id": 1 00:13:05.704 } 00:13:05.704 Got JSON-RPC error response 00:13:05.704 response: 00:13:05.704 { 00:13:05.704 "code": -32602, 00:13:05.704 "message": "Invalid cntlid range [6-5]" 00:13:05.704 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.704 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:05.966 { 00:13:05.966 "name": "foobar", 00:13:05.966 "method": "nvmf_delete_target", 00:13:05.966 "req_id": 1 00:13:05.966 } 00:13:05.966 Got JSON-RPC error response 00:13:05.966 response: 00:13:05.966 { 00:13:05.966 "code": -32602, 00:13:05.966 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:05.966 }' 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:05.966 { 00:13:05.966 "name": "foobar", 00:13:05.966 "method": "nvmf_delete_target", 00:13:05.966 "req_id": 1 00:13:05.966 } 00:13:05.966 Got JSON-RPC error response 00:13:05.966 response: 00:13:05.966 { 00:13:05.966 "code": -32602, 00:13:05.966 "message": "The specified target doesn't exist, cannot delete it." 
00:13:05.966 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.966 rmmod nvme_tcp 00:13:05.966 rmmod nvme_fabrics 00:13:05.966 rmmod nvme_keyring 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 948712 ']' 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 948712 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 948712 ']' 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 948712 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948712 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948712' 00:13:05.966 killing process with pid 948712 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 948712 00:13:05.966 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 948712 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.228 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.144 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:08.144 00:13:08.144 real 0m14.166s 00:13:08.144 user 0m21.003s 00:13:08.144 sys 0m6.752s 00:13:08.144 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.144 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:08.144 ************************************ 00:13:08.144 END TEST nvmf_invalid 00:13:08.144 ************************************ 00:13:08.144 14:00:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:08.144 14:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:08.144 14:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.144 14:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.406 ************************************ 00:13:08.406 START TEST nvmf_connect_stress 00:13:08.406 ************************************ 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:08.406 * Looking for test storage... 
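
Note on the nvmf_invalid case that just finished above: the test deliberately asks nvmf_create_subsystem for a controller ID range whose minimum (6) exceeds its maximum (5) and expects the target to answer with JSON-RPC error -32602 and an "Invalid cntlid range" message, which is exactly what the captured response shows. The test itself drives this through target/invalid.sh and the SPDK rpc helpers; purely as an illustration, the same rejection can be provoked with a small raw JSON-RPC client. The socket path (/var/tmp/spdk.sock) matches the default the target announces later in this log; the single-recv framing is an assumption that happens to hold for a response this small.

#!/usr/bin/env python3
# Minimal sketch (not the test's own code): reproduce the "Invalid cntlid range [6-5]"
# rejection by speaking JSON-RPC directly to a running SPDK target.
# Assumptions: default RPC socket path, and that one recv() returns the whole reply.
import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # assumption: target started with the default RPC address

def rpc_call(method, params, req_id=1):
    """Send one JSON-RPC 2.0 request over the Unix socket and return the parsed reply."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(RPC_SOCK)
        sock.sendall(json.dumps(req).encode())
        # A reply this small fits in one read; a robust client would loop until
        # a complete JSON document has been received.
        return json.loads(sock.recv(65536).decode())

if __name__ == "__main__":
    resp = rpc_call("nvmf_create_subsystem", {
        "nqn": "nqn.2016-06.io.spdk:cnode17011",
        "min_cntlid": 6,   # deliberately larger than max_cntlid
        "max_cntlid": 5,
    })
    err = resp.get("error", {})
    assert err.get("code") == -32602, resp
    assert "Invalid cntlid range" in err.get("message", ""), resp
    print("target rejected the bad range as expected:", err["message"])
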
00:13:08.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.406 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.407 --rc genhtml_branch_coverage=1 00:13:08.407 --rc genhtml_function_coverage=1 00:13:08.407 --rc genhtml_legend=1 00:13:08.407 --rc geninfo_all_blocks=1 00:13:08.407 --rc geninfo_unexecuted_blocks=1 00:13:08.407 00:13:08.407 ' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.407 --rc genhtml_branch_coverage=1 00:13:08.407 --rc genhtml_function_coverage=1 00:13:08.407 --rc genhtml_legend=1 00:13:08.407 --rc geninfo_all_blocks=1 00:13:08.407 --rc geninfo_unexecuted_blocks=1 00:13:08.407 00:13:08.407 ' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.407 --rc genhtml_branch_coverage=1 00:13:08.407 --rc genhtml_function_coverage=1 00:13:08.407 --rc genhtml_legend=1 00:13:08.407 --rc geninfo_all_blocks=1 00:13:08.407 --rc geninfo_unexecuted_blocks=1 00:13:08.407 00:13:08.407 ' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:08.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.407 --rc genhtml_branch_coverage=1 00:13:08.407 --rc genhtml_function_coverage=1 00:13:08.407 --rc genhtml_legend=1 00:13:08.407 --rc geninfo_all_blocks=1 00:13:08.407 --rc geninfo_unexecuted_blocks=1 00:13:08.407 00:13:08.407 ' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:08.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:08.407 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:16.556 14:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:16.556 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:16.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:16.556 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.556 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:16.557 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.557 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:16.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:13:16.557 00:13:16.557 --- 10.0.0.2 ping statistics --- 00:13:16.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.557 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:13:16.557 00:13:16.557 --- 10.0.0.1 ping statistics --- 00:13:16.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.557 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=954462 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 954462 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 954462 ']' 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:16.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.557 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.557 [2024-10-30 14:00:14.301088] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:13:16.557 [2024-10-30 14:00:14.301187] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.557 [2024-10-30 14:00:14.401196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:16.557 [2024-10-30 14:00:14.452917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.557 [2024-10-30 14:00:14.452968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.557 [2024-10-30 14:00:14.452984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.557 [2024-10-30 14:00:14.452991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.557 [2024-10-30 14:00:14.452997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.557 [2024-10-30 14:00:14.454873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.557 [2024-10-30 14:00:14.455042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.557 [2024-10-30 14:00:14.455043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.129 [2024-10-30 14:00:15.182183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
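
At this point the target side is fully plumbed: the e810 port cvl_0_0 sits in the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24, TCP port 4420 is opened in iptables, and nvmf_tgt (pid 954462) is being provisioned over JSON-RPC with a TCP transport and an allow-any-host subsystem nqn.2016-06.io.spdk:cnode1 (the listener and NULL1 bdev follow just below). For reference, a condensed sketch of that provisioning sequence is shown here; the socket path, the RPC parameter names and the null-bdev sizing are assumptions based on SPDK's JSON-RPC conventions, not a copy of the test's rpc_cmd wrapper.

#!/usr/bin/env python3
# Rough sketch of the provisioning the log performs through rpc_cmd: a TCP transport,
# an allow-any-host subsystem, a null bdev, and a listener on 10.0.0.2:4420.
# Assumptions: default RPC socket path, parameter names per SPDK JSON-RPC conventions,
# illustrative null-bdev sizing (the script's own arguments are not translated here).
import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # assumption: nvmf_tgt uses the default RPC address

def rpc(method, params=None, req_id=1):
    """Issue one JSON-RPC request and return its result, raising on an error reply."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(RPC_SOCK)
        sock.sendall(json.dumps(req).encode())
        resp = json.loads(sock.recv(65536).decode())
    if "error" in resp:
        raise RuntimeError(f"{method} failed: {resp['error']}")
    return resp["result"]

if __name__ == "__main__":
    rpc("nvmf_create_transport", {"trtype": "TCP"})
    rpc("nvmf_create_subsystem", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "allow_any_host": True,              # the -a flag in the rpc_cmd call above
        "serial_number": "SPDK00000000000001",
        "max_namespaces": 10,
    })
    rpc("bdev_null_create", {"name": "NULL1", "num_blocks": 1000, "block_size": 512})
    rpc("nvmf_subsystem_add_listener", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                           "traddr": "10.0.0.2", "trsvcid": "4420"},
    })
    print("transport, subsystem, bdev and listener configured")
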
00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.129 [2024-10-30 14:00:15.207896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.129 NULL1 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=954813 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.129 14:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.129 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.130 14:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.130 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.390 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.390 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:17.391 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.391 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.391 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.964 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.964 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:17.964 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.964 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.964 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.225 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.225 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:18.225 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.225 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.225 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.487 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.487 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:18.487 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.487 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.487 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.748 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.748 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:18.748 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.748 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.748 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.009 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.009 14:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:19.009 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.009 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.009 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.581 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.581 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:19.581 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.581 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.581 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.841 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.841 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:19.841 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.841 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.841 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.103 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.103 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:20.103 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.103 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.103 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.364 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.364 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:20.364 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.364 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.364 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.626 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.626 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:20.626 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.626 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.626 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.197 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.197 14:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:21.197 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.197 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.197 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.458 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.458 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:21.458 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.458 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.458 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.719 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.719 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:21.719 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.719 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.719 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.981 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.981 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:21.981 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.981 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.981 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.553 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.553 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:22.553 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.553 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.553 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.815 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.815 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:22.815 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.815 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.815 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.077 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.077 14:00:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:23.077 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.077 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.077 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.338 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.338 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:23.338 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.338 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.338 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.600 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.600 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:23.600 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.600 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.600 14:00:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.173 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.173 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:24.173 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.173 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.173 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.435 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.435 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:24.435 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.435 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.435 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.776 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.776 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:24.776 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.776 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.776 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.090 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.090 14:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:25.090 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.090 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.090 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.383 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.383 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:25.383 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.383 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.383 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.671 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.671 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:25.671 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.671 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.671 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.959 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.959 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:25.959 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.959 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.959 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.252 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.252 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:26.252 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.252 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.252 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.532 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.532 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:26.533 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.533 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.533 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.105 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.105 14:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:27.105 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.105 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.105 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.105 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:27.365 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 954813 00:13:27.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (954813) - No such process 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 954813 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.366 rmmod nvme_tcp 00:13:27.366 rmmod nvme_fabrics 00:13:27.366 rmmod nvme_keyring 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 954462 ']' 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 954462 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 954462 ']' 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 954462 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 954462 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
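Editor's note: the long run of identical `kill -0 954813` / `rpc_cmd` pairs above is connect_stress.sh polling whether the background stress initiator (pid 954813) is still alive while it keeps issuing RPCs against the target; once `kill -0` reports "No such process" (connect_stress.sh line 34), the script waits on the pid and moves into cleanup. A minimal sketch of that liveness-polling pattern, assuming an illustrative pacing and a stand-in for the RPC work (this is not the verbatim connect_stress.sh source):

  # Sketch of the polling loop recorded above; only the pid value is taken from the log.
  stress_pid=954813
  while kill -0 "$stress_pid" 2>/dev/null; do    # still running?
      some_rpc_work                              # hypothetical stand-in for the rpc_cmd calls
      sleep 1                                    # assumed pacing; the real script may differ
  done
  wait "$stress_pid" || true                     # reap it once it exits ("No such process" above)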
00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 954462' 00:13:27.366 killing process with pid 954462 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 954462 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 954462 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:27.366 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.626 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.626 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.626 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.626 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.626 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.541 00:13:29.541 real 0m21.286s 00:13:29.541 user 0m42.126s 00:13:29.541 sys 0m9.381s 00:13:29.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.541 ************************************ 00:13:29.541 END TEST nvmf_connect_stress 00:13:29.541 ************************************ 00:13:29.541 14:00:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:29.541 14:00:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.541 14:00:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.541 14:00:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.541 ************************************ 00:13:29.541 START TEST nvmf_fused_ordering 00:13:29.541 ************************************ 00:13:29.541 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:29.803 * Looking for test storage... 
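Editor's note: the teardown recorded above is nvmftestfini at work. It unloads the kernel NVMe/TCP initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills the nvmf_tgt reactor (pid 954462), and the iptr helper, judging by the three commands logged for it, restores the firewall by replaying the saved rules minus everything tagged SPDK_NVMF; the _remove_spdk_ns helper (not expanded in the log) then removes the cvl_0_0_ns_spdk namespace and the leftover address on cvl_0_1 is flushed. A sketch assembled from the commands visible in the log, not a verbatim copy of nvmf/common.sh:

  # Teardown steps as they appear above (names and pid taken from the log; ordering simplified)
  modprobe -v -r nvme-tcp                                 # rmmod nvme_tcp
  modprobe -v -r nvme-fabrics                             # rmmod nvme_fabrics (nvme_keyring goes with it)
  kill 954462                                             # stop the nvmf_tgt reactor
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK_NVMF-tagged rules
  ip -4 addr flush cvl_0_1                                # flush the initiator-side test address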
00:13:29.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.803 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.803 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.803 --rc genhtml_branch_coverage=1 00:13:29.803 --rc genhtml_function_coverage=1 00:13:29.803 --rc genhtml_legend=1 00:13:29.804 --rc geninfo_all_blocks=1 00:13:29.804 --rc geninfo_unexecuted_blocks=1 00:13:29.804 00:13:29.804 ' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.804 --rc genhtml_branch_coverage=1 00:13:29.804 --rc genhtml_function_coverage=1 00:13:29.804 --rc genhtml_legend=1 00:13:29.804 --rc geninfo_all_blocks=1 00:13:29.804 --rc geninfo_unexecuted_blocks=1 00:13:29.804 00:13:29.804 ' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.804 --rc genhtml_branch_coverage=1 00:13:29.804 --rc genhtml_function_coverage=1 00:13:29.804 --rc genhtml_legend=1 00:13:29.804 --rc geninfo_all_blocks=1 00:13:29.804 --rc geninfo_unexecuted_blocks=1 00:13:29.804 00:13:29.804 ' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.804 --rc genhtml_branch_coverage=1 00:13:29.804 --rc genhtml_function_coverage=1 00:13:29.804 --rc genhtml_legend=1 00:13:29.804 --rc geninfo_all_blocks=1 00:13:29.804 --rc geninfo_unexecuted_blocks=1 00:13:29.804 00:13:29.804 ' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:29.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.804 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:37.948 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.948 14:00:35 
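Editor's note: the "[: : integer expression expected" complaint from nvmf/common.sh line 33 near the top of this block is a benign side effect of feeding an empty value to the numeric -eq operator: test/`[` prints that diagnostic and returns non-zero, the condition simply evaluates false, and the script carries on. A tiny generic illustration of the failure mode and the usual guard (plain bash, not the SPDK source):

  # Generic illustration: empty value vs. guarded numeric test
  val=""
  if [ "$val" -eq 1 ]; then echo yes; fi        # prints "[: : integer expression expected", branch not taken
  if [ "${val:-0}" -eq 1 ]; then echo yes; fi   # default the value first, so no diagnostic is emitted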
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:37.949 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:37.949 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:37.949 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:37.949 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
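Editor's note: the device scan above is gather_supported_nvmf_pci_devs matching the two Intel E810 ports (vendor 0x8086, device 0x159b) and resolving each PCI function to its kernel netdev through the /sys/bus/pci/devices/$pci/net/ glob shown in the trace, which yields cvl_0_0 and cvl_0_1. The core of that lookup can be reproduced with plain sysfs globbing; this is a simplified sketch, whereas the real helper also handles Mellanox IDs, RDMA transports, and link-state checks:

  # Simplified sketch: list the net devices behind every Intel E810 (0x8086:0x159b) PCI function
  for pci in /sys/bus/pci/devices/*; do
      [ "$(cat "$pci/vendor")" = "0x8086" ] || continue
      [ "$(cat "$pci/device")" = "0x159b" ] || continue
      for net in "$pci"/net/*; do
          [ -e "$net" ] && echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done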
-- # net_devs+=("${pci_net_devs[@]}") 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:37.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:13:37.949 00:13:37.949 --- 10.0.0.2 ping statistics --- 00:13:37.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.949 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:13:37.949 00:13:37.949 --- 10.0.0.1 ping statistics --- 00:13:37.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.949 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=960893 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 960893 00:13:37.949 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:37.950 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 960893 ']' 00:13:37.950 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.950 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.950 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:37.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.950 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.950 14:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.950 [2024-10-30 14:00:35.605995] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:13:37.950 [2024-10-30 14:00:35.606064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.950 [2024-10-30 14:00:35.705972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.950 [2024-10-30 14:00:35.757129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.950 [2024-10-30 14:00:35.757179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.950 [2024-10-30 14:00:35.757188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.950 [2024-10-30 14:00:35.757195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.950 [2024-10-30 14:00:35.757203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.950 [2024-10-30 14:00:35.758002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 [2024-10-30 14:00:36.464661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 [2024-10-30 14:00:36.488935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 NULL1 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.212 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.474 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.474 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:38.474 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.474 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.474 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.474 14:00:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:38.474 [2024-10-30 14:00:36.558483] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
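Editor's note: the five rpc_cmd calls above complete the target configuration for this test: a TCP transport (with the -o -u 8192 options passed by fused_ordering.sh), subsystem nqn.2016-06.io.spdk:cnode1 with allow-any-host, serial SPDK00000000000001 and a 10-namespace cap, a listener on 10.0.0.2:4420, a 1000 MB null bdev (reported as a 1 GB namespace further down), and that bdev attached as namespace 1. In the harness these go through the rpc_cmd wrapper; issued by hand against the same target they would look roughly like the following, assuming the stock scripts/rpc.py client and its default RPC socket (an assumption, since the wrapper itself is not expanded in the log):

  # Equivalent manual RPC sequence (sketch; client invocation assumed, flags copied from the log)
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512             # 1000 MB backing size, 512 B blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # the initiator-side fused_ordering binary then connects with:
  #   -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'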
00:13:38.474 [2024-10-30 14:00:36.558548] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961209 ] 00:13:39.046 Attached to nqn.2016-06.io.spdk:cnode1 00:13:39.046 Namespace ID: 1 size: 1GB 00:13:39.046 fused_ordering(0) 00:13:39.046 fused_ordering(1) 00:13:39.046 fused_ordering(2) 00:13:39.046 fused_ordering(3) 00:13:39.046 fused_ordering(4) 00:13:39.046 fused_ordering(5) 00:13:39.046 fused_ordering(6) 00:13:39.046 fused_ordering(7) 00:13:39.046 fused_ordering(8) 00:13:39.046 fused_ordering(9) 00:13:39.046 fused_ordering(10) 00:13:39.046 fused_ordering(11) 00:13:39.046 fused_ordering(12) 00:13:39.046 fused_ordering(13) 00:13:39.046 fused_ordering(14) 00:13:39.046 fused_ordering(15) 00:13:39.046 fused_ordering(16) 00:13:39.046 fused_ordering(17) 00:13:39.046 fused_ordering(18) 00:13:39.046 fused_ordering(19) 00:13:39.046 fused_ordering(20) 00:13:39.046 fused_ordering(21) 00:13:39.046 fused_ordering(22) 00:13:39.046 fused_ordering(23) 00:13:39.046 fused_ordering(24) 00:13:39.046 fused_ordering(25) 00:13:39.046 fused_ordering(26) 00:13:39.046 fused_ordering(27) 00:13:39.046 fused_ordering(28) 00:13:39.046 fused_ordering(29) 00:13:39.046 fused_ordering(30) 00:13:39.046 fused_ordering(31) 00:13:39.046 fused_ordering(32) 00:13:39.046 fused_ordering(33) 00:13:39.046 fused_ordering(34) 00:13:39.046 fused_ordering(35) 00:13:39.046 fused_ordering(36) 00:13:39.046 fused_ordering(37) 00:13:39.046 fused_ordering(38) 00:13:39.046 fused_ordering(39) 00:13:39.046 fused_ordering(40) 00:13:39.046 fused_ordering(41) 00:13:39.046 fused_ordering(42) 00:13:39.046 fused_ordering(43) 00:13:39.046 fused_ordering(44) 00:13:39.046 fused_ordering(45) 00:13:39.046 fused_ordering(46) 00:13:39.046 fused_ordering(47) 00:13:39.046 fused_ordering(48) 00:13:39.046 fused_ordering(49) 00:13:39.046 fused_ordering(50) 00:13:39.046 fused_ordering(51) 00:13:39.046 fused_ordering(52) 00:13:39.046 fused_ordering(53) 00:13:39.046 fused_ordering(54) 00:13:39.046 fused_ordering(55) 00:13:39.046 fused_ordering(56) 00:13:39.046 fused_ordering(57) 00:13:39.046 fused_ordering(58) 00:13:39.046 fused_ordering(59) 00:13:39.046 fused_ordering(60) 00:13:39.046 fused_ordering(61) 00:13:39.046 fused_ordering(62) 00:13:39.046 fused_ordering(63) 00:13:39.046 fused_ordering(64) 00:13:39.046 fused_ordering(65) 00:13:39.046 fused_ordering(66) 00:13:39.046 fused_ordering(67) 00:13:39.046 fused_ordering(68) 00:13:39.046 fused_ordering(69) 00:13:39.046 fused_ordering(70) 00:13:39.046 fused_ordering(71) 00:13:39.046 fused_ordering(72) 00:13:39.046 fused_ordering(73) 00:13:39.046 fused_ordering(74) 00:13:39.046 fused_ordering(75) 00:13:39.046 fused_ordering(76) 00:13:39.046 fused_ordering(77) 00:13:39.046 fused_ordering(78) 00:13:39.046 fused_ordering(79) 00:13:39.046 fused_ordering(80) 00:13:39.046 fused_ordering(81) 00:13:39.046 fused_ordering(82) 00:13:39.046 fused_ordering(83) 00:13:39.046 fused_ordering(84) 00:13:39.046 fused_ordering(85) 00:13:39.046 fused_ordering(86) 00:13:39.046 fused_ordering(87) 00:13:39.046 fused_ordering(88) 00:13:39.046 fused_ordering(89) 00:13:39.046 fused_ordering(90) 00:13:39.046 fused_ordering(91) 00:13:39.046 fused_ordering(92) 00:13:39.046 fused_ordering(93) 00:13:39.046 fused_ordering(94) 00:13:39.046 fused_ordering(95) 00:13:39.046 fused_ordering(96) 00:13:39.046 fused_ordering(97) 00:13:39.046 fused_ordering(98) 
00:13:39.046 fused_ordering(99) ... 00:13:41.088 fused_ordering(958) (fused_ordering entries 99 through 958 were emitted consecutively between 00:13:39.046 and 00:13:41.088; the individual entries are elided here and the tail of the run continues below)
00:13:41.088 fused_ordering(959) 00:13:41.088 fused_ordering(960) 00:13:41.088 fused_ordering(961) 00:13:41.088 fused_ordering(962) 00:13:41.088 fused_ordering(963) 00:13:41.088 fused_ordering(964) 00:13:41.088 fused_ordering(965) 00:13:41.088 fused_ordering(966) 00:13:41.088 fused_ordering(967) 00:13:41.088 fused_ordering(968) 00:13:41.088 fused_ordering(969) 00:13:41.088 fused_ordering(970) 00:13:41.088 fused_ordering(971) 00:13:41.088 fused_ordering(972) 00:13:41.088 fused_ordering(973) 00:13:41.088 fused_ordering(974) 00:13:41.088 fused_ordering(975) 00:13:41.088 fused_ordering(976) 00:13:41.088 fused_ordering(977) 00:13:41.088 fused_ordering(978) 00:13:41.088 fused_ordering(979) 00:13:41.088 fused_ordering(980) 00:13:41.088 fused_ordering(981) 00:13:41.088 fused_ordering(982) 00:13:41.088 fused_ordering(983) 00:13:41.088 fused_ordering(984) 00:13:41.088 fused_ordering(985) 00:13:41.088 fused_ordering(986) 00:13:41.088 fused_ordering(987) 00:13:41.088 fused_ordering(988) 00:13:41.088 fused_ordering(989) 00:13:41.088 fused_ordering(990) 00:13:41.088 fused_ordering(991) 00:13:41.088 fused_ordering(992) 00:13:41.088 fused_ordering(993) 00:13:41.088 fused_ordering(994) 00:13:41.088 fused_ordering(995) 00:13:41.088 fused_ordering(996) 00:13:41.088 fused_ordering(997) 00:13:41.088 fused_ordering(998) 00:13:41.088 fused_ordering(999) 00:13:41.088 fused_ordering(1000) 00:13:41.088 fused_ordering(1001) 00:13:41.088 fused_ordering(1002) 00:13:41.088 fused_ordering(1003) 00:13:41.088 fused_ordering(1004) 00:13:41.088 fused_ordering(1005) 00:13:41.088 fused_ordering(1006) 00:13:41.088 fused_ordering(1007) 00:13:41.088 fused_ordering(1008) 00:13:41.088 fused_ordering(1009) 00:13:41.088 fused_ordering(1010) 00:13:41.088 fused_ordering(1011) 00:13:41.088 fused_ordering(1012) 00:13:41.088 fused_ordering(1013) 00:13:41.088 fused_ordering(1014) 00:13:41.088 fused_ordering(1015) 00:13:41.088 fused_ordering(1016) 00:13:41.088 fused_ordering(1017) 00:13:41.088 fused_ordering(1018) 00:13:41.088 fused_ordering(1019) 00:13:41.088 fused_ordering(1020) 00:13:41.088 fused_ordering(1021) 00:13:41.088 fused_ordering(1022) 00:13:41.088 fused_ordering(1023) 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:41.088 rmmod nvme_tcp 00:13:41.088 rmmod nvme_fabrics 00:13:41.088 rmmod nvme_keyring 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:41.088 14:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 960893 ']' 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 960893 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 960893 ']' 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 960893 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 960893 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 960893' 00:13:41.088 killing process with pid 960893 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 960893 00:13:41.088 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 960893 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.349 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.262 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:43.262 00:13:43.262 real 0m13.664s 00:13:43.262 user 0m7.356s 00:13:43.262 sys 0m7.312s 00:13:43.262 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.262 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.262 ************************************ 00:13:43.262 END TEST nvmf_fused_ordering 00:13:43.262 
************************************ 00:13:43.262 14:00:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:43.262 14:00:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:43.262 14:00:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.262 14:00:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:43.525 ************************************ 00:13:43.525 START TEST nvmf_ns_masking 00:13:43.525 ************************************ 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:43.525 * Looking for test storage... 00:13:43.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:43.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.525 --rc genhtml_branch_coverage=1 00:13:43.525 --rc genhtml_function_coverage=1 00:13:43.525 --rc genhtml_legend=1 00:13:43.525 --rc geninfo_all_blocks=1 00:13:43.525 --rc geninfo_unexecuted_blocks=1 00:13:43.525 00:13:43.525 ' 00:13:43.525 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:43.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.525 --rc genhtml_branch_coverage=1 00:13:43.525 --rc genhtml_function_coverage=1 00:13:43.525 --rc genhtml_legend=1 00:13:43.525 --rc geninfo_all_blocks=1 00:13:43.525 --rc geninfo_unexecuted_blocks=1 00:13:43.526 00:13:43.526 ' 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:43.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.526 --rc genhtml_branch_coverage=1 00:13:43.526 --rc genhtml_function_coverage=1 00:13:43.526 --rc genhtml_legend=1 00:13:43.526 --rc geninfo_all_blocks=1 00:13:43.526 --rc geninfo_unexecuted_blocks=1 00:13:43.526 00:13:43.526 ' 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:43.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.526 --rc genhtml_branch_coverage=1 00:13:43.526 --rc genhtml_function_coverage=1 00:13:43.526 --rc genhtml_legend=1 00:13:43.526 --rc geninfo_all_blocks=1 00:13:43.526 --rc geninfo_unexecuted_blocks=1 00:13:43.526 00:13:43.526 ' 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:43.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3e553672-f9bb-403c-b128-3604ee5a3847 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fbea2fa8-0bf4-477a-9867-b8700123f0ce 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7dff9a4c-5f63-47e9-8ec8-878df50da412 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:43.526 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:43.788 14:00:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:51.931 14:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:51.931 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:51.931 14:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:51.931 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:51.931 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:51.931 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
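For readers skimming the device-discovery trace above: gather_supported_nvmf_pci_devs is matching the Intel E810 PCI ID (0x8086:0x159b) and then resolving each matched PCI function to its kernel net device via sysfs, which is how the cvl_0_0/cvl_0_1 names appear. The harness does this with its own cached PCI tables; the lines below are only a minimal stand-alone sketch of the same PCI-to-netdev lookup, not the harness code. The trace then continues with the second port, 0000:4b:00.1.

  # Illustrative only: list E810 (8086:159b) functions and the netdev each one exposes,
  # the same mapping the trace above performs before selecting cvl_0_0 and cvl_0_1.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$netdir" ] || continue
      echo "$pci -> $(basename "$netdir")"
    done
  done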
00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:51.932 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.932 14:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:51.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:13:51.932 00:13:51.932 --- 10.0.0.2 ping statistics --- 00:13:51.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.932 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:13:51.932 00:13:51.932 --- 10.0.0.1 ping statistics --- 00:13:51.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.932 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=965902 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 965902 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 965902 ']' 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.932 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.932 [2024-10-30 14:00:49.429210] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:13:51.932 [2024-10-30 14:00:49.429282] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.932 [2024-10-30 14:00:49.527831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.932 [2024-10-30 14:00:49.581051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.932 [2024-10-30 14:00:49.581102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.932 [2024-10-30 14:00:49.581114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.932 [2024-10-30 14:00:49.581124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.932 [2024-10-30 14:00:49.581132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
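For readability, the nvmf_tcp_init / nvmfappstart trace above condenses to roughly the following target-side bring-up. This is a sketch reconstructed from this run's own commands, not the common.sh helper itself: the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, the cvl_0_0_ns_spdk namespace and the nvmf_tgt flags are specific to this job, and the socket poll at the end stands in for the waitforlisten helper.

    # move the target-side port into its own network namespace and address both ends
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open TCP/4420 on the initiator interface and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # load the host-side driver used by the later nvme connect calls
    modprobe nvme-tcp
    # start the NVMe-oF target inside the namespace and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done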
00:13:51.932 [2024-10-30 14:00:49.581984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.194 [2024-10-30 14:00:50.448083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:52.194 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:52.455 Malloc1 00:13:52.455 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:52.716 Malloc2 00:13:52.716 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.976 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:52.976 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.238 [2024-10-30 14:00:51.407469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.238 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:53.238 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7dff9a4c-5f63-47e9-8ec8-878df50da412 -a 10.0.0.2 -s 4420 -i 4 00:13:53.499 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:53.499 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:53.499 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.499 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:53.499 
14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.416 [ 0]:0x1 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89d15ea1aed2419eb5a5e2f7aea1a89d 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89d15ea1aed2419eb5a5e2f7aea1a89d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.416 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:55.677 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.678 [ 0]:0x1 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89d15ea1aed2419eb5a5e2f7aea1a89d 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89d15ea1aed2419eb5a5e2f7aea1a89d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.678 14:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:55.678 [ 1]:0x2 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a033880d454926a66de6399417455a 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a033880d454926a66de6399417455a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:55.678 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.939 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.199 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:56.460 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:56.460 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7dff9a4c-5f63-47e9-8ec8-878df50da412 -a 10.0.0.2 -s 4420 -i 4 00:13:56.460 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:56.460 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:56.460 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.460 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:56.460 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:56.460 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:58.376 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.637 [ 0]:0x2 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=23a033880d454926a66de6399417455a 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a033880d454926a66de6399417455a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.637 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.898 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:58.898 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.898 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.898 [ 0]:0x1 00:13:58.898 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.898 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.898 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89d15ea1aed2419eb5a5e2f7aea1a89d 00:13:58.898 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89d15ea1aed2419eb5a5e2f7aea1a89d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.898 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:59.159 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.159 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:59.159 [ 1]:0x2 00:13:59.159 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:59.159 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.159 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a033880d454926a66de6399417455a 00:13:59.159 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a033880d454926a66de6399417455a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.159 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.419 14:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:59.419 [ 0]:0x2 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a033880d454926a66de6399417455a 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a033880d454926a66de6399417455a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.419 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:59.678 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:59.678 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7dff9a4c-5f63-47e9-8ec8-878df50da412 -a 10.0.0.2 -s 4420 -i 4 00:13:59.938 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:59.938 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:59.938 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.938 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:59.938 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:59.938 14:00:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:01.846 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:01.846 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:01.846 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.847 [ 0]:0x1 00:14:01.847 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89d15ea1aed2419eb5a5e2f7aea1a89d 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89d15ea1aed2419eb5a5e2f7aea1a89d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.107 [ 1]:0x2 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a033880d454926a66de6399417455a 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a033880d454926a66de6399417455a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.107 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:02.368 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:02.368 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:02.368 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:02.368 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:02.368 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.368 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:02.368 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.368 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.369 [ 0]:0x2 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a033880d454926a66de6399417455a 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a033880d454926a66de6399417455a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.369 14:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:02.369 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:02.630 [2024-10-30 14:01:00.716830] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:02.630 request: 00:14:02.630 { 00:14:02.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.630 "nsid": 2, 00:14:02.630 "host": "nqn.2016-06.io.spdk:host1", 00:14:02.630 "method": "nvmf_ns_remove_host", 00:14:02.630 "req_id": 1 00:14:02.630 } 00:14:02.630 Got JSON-RPC error response 00:14:02.630 response: 00:14:02.630 { 00:14:02.630 "code": -32602, 00:14:02.630 "message": "Invalid parameters" 00:14:02.630 } 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:02.630 14:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.630 [ 0]:0x2 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a033880d454926a66de6399417455a 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a033880d454926a66de6399417455a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.630 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:02.631 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=968401 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 968401 /var/tmp/host.sock 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 968401 ']' 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:02.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.892 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:02.892 [2024-10-30 14:01:00.989851] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:14:02.892 [2024-10-30 14:01:00.989903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968401 ] 00:14:02.892 [2024-10-30 14:01:01.077968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.892 [2024-10-30 14:01:01.113850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.834 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.834 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:03.834 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.834 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.834 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3e553672-f9bb-403c-b128-3604ee5a3847 00:14:03.834 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:03.834 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3E553672F9BB403CB1283604EE5A3847 -i 00:14:04.094 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid fbea2fa8-0bf4-477a-9867-b8700123f0ce 00:14:04.095 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:04.095 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FBEA2FA80BF4477A9867B8700123F0CE -i 00:14:04.357 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:04.357 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:04.619 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:04.619 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:04.880 nvme0n1 00:14:04.880 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:04.880 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:05.452 nvme1n2 00:14:05.452 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:05.452 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:05.452 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:05.452 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:05.452 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:05.452 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:05.452 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:05.452 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:05.452 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:05.713 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3e553672-f9bb-403c-b128-3604ee5a3847 == \3\e\5\5\3\6\7\2\-\f\9\b\b\-\4\0\3\c\-\b\1\2\8\-\3\6\0\4\e\e\5\a\3\8\4\7 ]] 00:14:05.713 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:05.713 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:05.713 14:01:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:05.975 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
fbea2fa8-0bf4-477a-9867-b8700123f0ce == \f\b\e\a\2\f\a\8\-\0\b\f\4\-\4\7\7\a\-\9\8\6\7\-\b\8\7\0\0\1\2\3\f\0\c\e ]] 00:14:05.975 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.975 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3e553672-f9bb-403c-b128-3604ee5a3847 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3E553672F9BB403CB1283604EE5A3847 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3E553672F9BB403CB1283604EE5A3847 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3E553672F9BB403CB1283604EE5A3847 00:14:06.236 [2024-10-30 14:01:04.494782] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:06.236 [2024-10-30 14:01:04.494813] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:06.236 [2024-10-30 14:01:04.494822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.236 request: 00:14:06.236 { 00:14:06.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.236 "namespace": { 00:14:06.236 "bdev_name": 
"invalid", 00:14:06.236 "nsid": 1, 00:14:06.236 "nguid": "3E553672F9BB403CB1283604EE5A3847", 00:14:06.236 "no_auto_visible": false 00:14:06.236 }, 00:14:06.236 "method": "nvmf_subsystem_add_ns", 00:14:06.236 "req_id": 1 00:14:06.236 } 00:14:06.236 Got JSON-RPC error response 00:14:06.236 response: 00:14:06.236 { 00:14:06.236 "code": -32602, 00:14:06.236 "message": "Invalid parameters" 00:14:06.236 } 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3e553672-f9bb-403c-b128-3604ee5a3847 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:06.236 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3E553672F9BB403CB1283604EE5A3847 -i 00:14:06.498 14:01:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:08.414 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:08.414 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:08.414 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 968401 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 968401 ']' 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 968401 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 968401 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 968401' 00:14:08.675 killing process with pid 968401 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 968401 00:14:08.675 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 968401 00:14:08.937 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.198 rmmod nvme_tcp 00:14:09.198 rmmod nvme_fabrics 00:14:09.198 rmmod nvme_keyring 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 965902 ']' 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 965902 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 965902 ']' 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 965902 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:09.198 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.199 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 965902 00:14:09.199 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.199 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.199 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 965902' 00:14:09.199 killing process with pid 965902 00:14:09.199 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 965902 00:14:09.199 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 965902 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:09.460 
14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.460 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.376 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:11.376 00:14:11.376 real 0m28.074s 00:14:11.376 user 0m31.669s 00:14:11.376 sys 0m8.232s 00:14:11.377 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.377 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.377 ************************************ 00:14:11.377 END TEST nvmf_ns_masking 00:14:11.377 ************************************ 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:11.638 ************************************ 00:14:11.638 START TEST nvmf_nvme_cli 00:14:11.638 ************************************ 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:11.638 * Looking for test storage... 
00:14:11.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:11.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.638 --rc genhtml_branch_coverage=1 00:14:11.638 --rc genhtml_function_coverage=1 00:14:11.638 --rc genhtml_legend=1 00:14:11.638 --rc geninfo_all_blocks=1 00:14:11.638 --rc geninfo_unexecuted_blocks=1 00:14:11.638 00:14:11.638 ' 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:11.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.638 --rc genhtml_branch_coverage=1 00:14:11.638 --rc genhtml_function_coverage=1 00:14:11.638 --rc genhtml_legend=1 00:14:11.638 --rc geninfo_all_blocks=1 00:14:11.638 --rc geninfo_unexecuted_blocks=1 00:14:11.638 00:14:11.638 ' 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:11.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.638 --rc genhtml_branch_coverage=1 00:14:11.638 --rc genhtml_function_coverage=1 00:14:11.638 --rc genhtml_legend=1 00:14:11.638 --rc geninfo_all_blocks=1 00:14:11.638 --rc geninfo_unexecuted_blocks=1 00:14:11.638 00:14:11.638 ' 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:11.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.638 --rc genhtml_branch_coverage=1 00:14:11.638 --rc genhtml_function_coverage=1 00:14:11.638 --rc genhtml_legend=1 00:14:11.638 --rc geninfo_all_blocks=1 00:14:11.638 --rc geninfo_unexecuted_blocks=1 00:14:11.638 00:14:11.638 ' 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
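The cmp_versions trace above is the harness deciding whether the installed lcov predates 2.x before it selects the branch/function-coverage flags exported just afterwards. A minimal sketch of that dotted-version "less than" test, assuming purely numeric components (no rc or pre-release suffixes):

  # Return 0 (true) if dotted version $1 is strictly less than $2, e.g. version_lt 1.15 2.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}                  # missing components compare as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                                       # equal versions are not "less than"
  }

  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2.x"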
00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.638 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.639 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.639 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.639 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.639 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.899 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.900 14:01:09 
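The "[: : integer expression expected" message above is benign: at nvmf/common.sh line 33 an unset or empty flag reaches a numeric test, so [ '' -eq 1 ] complains and the branch simply falls through. A short illustration of that failure mode and the usual guard (the flag name is illustrative, not taken from the script):

  flag=""                                    # unset/empty feature flag
  [ "$flag" -eq 1 ] && echo enabled          # prints "[: : integer expression expected"; test is false
  [ "${flag:-0}" -eq 1 ] && echo enabled     # defaulting the empty value to 0 keeps the test quiet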
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:11.900 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:20.046 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:20.046 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.046 
14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:20.046 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.046 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:20.046 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
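gather_supported_nvmf_pci_devs above walks a whitelist of Intel/Mellanox device IDs and then resolves each matching PCI function to its kernel interface through sysfs, which is how cvl_0_0 and cvl_0_1 were found under 0000:4b:00.0 and 0000:4b:00.1. A small sketch of that sysfs lookup for the E810 device ID seen here (0x159b); the lspci enumeration is an assumption of this sketch, not how the script builds its device list:

  # Print the net device(s) behind every Intel E810 function (vendor 0x8086, device 0x159b).
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] || continue                 # function may have no bound network driver
          echo "Found net device under $pci: ${path##*/}"
      done
  done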
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:20.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:14:20.047 00:14:20.047 --- 10.0.0.2 ping statistics --- 00:14:20.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.047 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:20.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:14:20.047 00:14:20.047 --- 10.0.0.1 ping statistics --- 00:14:20.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.047 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=973796 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 973796 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 973796 ']' 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.047 14:01:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.047 [2024-10-30 14:01:17.537920] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
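nvmf_tcp_init above builds the back-to-back topology the test needs: the target port cvl_0_0 is moved into its own network namespace, the two ports get 10.0.0.2 and 10.0.0.1, TCP port 4420 is opened with an SPDK-tagged iptables rule, both directions are pinged, and only then is nvmf_tgt started inside that namespace. A condensed sketch of the same setup (run as root from the spdk checkout; interface names, addresses, and target flags are taken from the trace, while the readiness poll at the end is an assumption using the stock spdk_get_version RPC):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                             # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side stays in the default netns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF: allow nvmf tcp 4420'   # tag lets teardown strip the rule again
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1

  # Start the target inside the namespace and wait until its RPC socket answers.
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -t 2 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done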
00:14:20.047 [2024-10-30 14:01:17.537989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.047 [2024-10-30 14:01:17.635594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.047 [2024-10-30 14:01:17.689306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.047 [2024-10-30 14:01:17.689359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.047 [2024-10-30 14:01:17.689370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.047 [2024-10-30 14:01:17.689381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.047 [2024-10-30 14:01:17.689390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.047 [2024-10-30 14:01:17.691806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.047 [2024-10-30 14:01:17.691967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.047 [2024-10-30 14:01:17.692126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.047 [2024-10-30 14:01:17.692128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 [2024-10-30 14:01:18.408692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 Malloc0 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 Malloc1 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 [2024-10-30 14:01:18.520680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.310 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:20.573 00:14:20.573 Discovery Log Number of Records 2, Generation counter 2 00:14:20.573 =====Discovery Log Entry 0====== 00:14:20.573 trtype: tcp 00:14:20.573 adrfam: ipv4 00:14:20.573 subtype: current discovery subsystem 00:14:20.573 treq: not required 00:14:20.573 portid: 0 00:14:20.573 trsvcid: 4420 00:14:20.573 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:20.573 traddr: 10.0.0.2 00:14:20.573 eflags: explicit discovery connections, duplicate discovery information 00:14:20.573 sectype: none 00:14:20.573 =====Discovery Log Entry 1====== 00:14:20.573 trtype: tcp 00:14:20.573 adrfam: ipv4 00:14:20.573 subtype: nvme subsystem 00:14:20.573 treq: not required 00:14:20.573 portid: 0 00:14:20.573 trsvcid: 4420 00:14:20.573 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:20.573 traddr: 10.0.0.2 00:14:20.573 eflags: none 00:14:20.573 sectype: none 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:20.573 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:22.496 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:22.496 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:22.496 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.496 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:22.496 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:22.496 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:24.414 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:24.414 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:24.414 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.414 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:24.414 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:24.415 14:01:22 
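The sequence above is the heart of nvme_cli.sh: over RPC the test creates the TCP transport, two 64 MiB malloc bdevs, a subsystem with serial SPDKISFASTANDAWESOME, attaches both namespaces, exposes the subsystem plus the discovery service on 10.0.0.2:4420, and then drives the kernel initiator with nvme discover and nvme connect using a host NQN generated by nvme gen-hostnqn. Restated as a plain script (commands copied from the trace; HOSTNQN and HOSTID just stand in for the generated values):

  RPC=./scripts/rpc.py                                        # assumes the spdk checkout as cwd
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
  "$RPC" bdev_malloc_create 64 512 -b Malloc1
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  HOSTNQN=$(nvme gen-hostnqn)
  HOSTID=${HOSTNQN##*uuid:}                                   # the test reuses the NQN's uuid as hostid
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 4420
  nvme connect  --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420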
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:24.415 /dev/nvme0n2 ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.415 14:01:22 
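After the connect, waitforserial polls lsblk -l -o NAME,SERIAL until two block devices carry the subsystem serial, get_nvme_devs scrapes nvme list for /dev/nvme* nodes (here /dev/nvme0n1 and /dev/nvme0n2), and the controller is detached again with nvme disconnect. A compact sketch of that verify-and-disconnect loop, assuming the serial and NQN used above:

  SERIAL=SPDKISFASTANDAWESOME
  # Wait (up to ~30 s) until both namespaces appear with the expected serial.
  for _ in $(seq 1 15); do
      [ "$(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL")" -ge 2 ] && break
      sleep 2
  done

  nvme list | awk '$1 ~ /^\/dev\/nvme/ {print $1}'            # expect /dev/nvme0n1 and /dev/nvme0n2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # detach the controller

  # Confirm the devices are gone before deleting the subsystem on the target side.
  lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL" && echo "still connected" || echo "disconnected"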
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:24.415 rmmod nvme_tcp 00:14:24.415 rmmod nvme_fabrics 00:14:24.415 rmmod nvme_keyring 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 973796 ']' 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 973796 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 973796 ']' 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 973796 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973796 
00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973796' 00:14:24.415 killing process with pid 973796 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 973796 00:14:24.415 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 973796 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.678 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.596 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:26.596 00:14:26.596 real 0m15.130s 00:14:26.596 user 0m22.449s 00:14:26.596 sys 0m6.458s 00:14:26.596 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.596 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.596 ************************************ 00:14:26.596 END TEST nvmf_nvme_cli 00:14:26.596 ************************************ 00:14:26.596 14:01:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:26.596 14:01:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:26.596 14:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:26.596 14:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.596 14:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:26.858 ************************************ 00:14:26.858 START TEST nvmf_vfio_user 00:14:26.858 ************************************ 00:14:26.858 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:26.858 * Looking for test storage... 00:14:26.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:26.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.858 --rc genhtml_branch_coverage=1 00:14:26.858 --rc genhtml_function_coverage=1 00:14:26.858 --rc genhtml_legend=1 00:14:26.858 --rc geninfo_all_blocks=1 00:14:26.858 --rc geninfo_unexecuted_blocks=1 00:14:26.858 00:14:26.858 ' 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:26.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.858 --rc genhtml_branch_coverage=1 00:14:26.858 --rc genhtml_function_coverage=1 00:14:26.858 --rc genhtml_legend=1 00:14:26.858 --rc geninfo_all_blocks=1 00:14:26.858 --rc geninfo_unexecuted_blocks=1 00:14:26.858 00:14:26.858 ' 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:26.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.858 --rc genhtml_branch_coverage=1 00:14:26.858 --rc genhtml_function_coverage=1 00:14:26.858 --rc genhtml_legend=1 00:14:26.858 --rc geninfo_all_blocks=1 00:14:26.858 --rc geninfo_unexecuted_blocks=1 00:14:26.858 00:14:26.858 ' 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:26.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.858 --rc genhtml_branch_coverage=1 00:14:26.858 --rc genhtml_function_coverage=1 00:14:26.858 --rc genhtml_legend=1 00:14:26.858 --rc geninfo_all_blocks=1 00:14:26.858 --rc geninfo_unexecuted_blocks=1 00:14:26.858 00:14:26.858 ' 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.858 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:27.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
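The "[: : integer expression expected" message above is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the variable under test is unset, so test receives an empty string where -eq expects an integer. The comparison still resolves as false and the run continues, but the usual guard is to default the expansion before comparing. A minimal sketch, with FLAG standing in for whichever variable common.sh actually checks at that line:

  # fails with "[: : integer expression expected" whenever FLAG is empty or unset
  [ "$FLAG" -eq 1 ] && echo enabled

  # defaulting the expansion keeps the operand numeric in every case
  [ "${FLAG:-0}" -eq 1 ] && echo enabled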
00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=975438 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 975438' 00:14:27.121 Process pid: 975438 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 975438 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 975438 ']' 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.121 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:27.121 [2024-10-30 14:01:25.247367] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:14:27.121 [2024-10-30 14:01:25.247448] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.121 [2024-10-30 14:01:25.337735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.121 [2024-10-30 14:01:25.372200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.121 [2024-10-30 14:01:25.372231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:27.121 [2024-10-30 14:01:25.372239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.121 [2024-10-30 14:01:25.372246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.121 [2024-10-30 14:01:25.372251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.121 [2024-10-30 14:01:25.373505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.121 [2024-10-30 14:01:25.373655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.121 [2024-10-30 14:01:25.373786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.121 [2024-10-30 14:01:25.373787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.066 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.066 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:28.066 14:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:29.009 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:29.009 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:29.009 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:29.009 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:29.009 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:29.009 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:29.271 Malloc1 00:14:29.271 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:29.533 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:29.533 14:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:29.794 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:29.794 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:29.794 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:30.056 Malloc2 00:14:30.056 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
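Taken together, the trace above amounts to a short bring-up recipe: start nvmf_tgt, wait for its RPC socket, create the VFIOUSER transport once, then give each device a socket directory, a malloc bdev, a subsystem, a namespace and a listener. The sketch below restates those same rpc.py calls as one loop; the $SPDK shorthand for the workspace checkout and the 30-second wait loop are assumptions added for readability, while the two-device count and the 64 MB / 512 B malloc geometry come straight from the trace.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed shorthand for the checkout

  # start the target on cores 0-3, as in the trace, and wait for the RPC socket
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  for _ in $(seq 1 30); do
      $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
      sleep 1
  done

  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER

  for i in 1 2; do
      dir=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$dir"
      # 64 MB backing bdev with 512-byte blocks
      $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      # for VFIOUSER the listener address is the socket directory, not an IP:port
      $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a "$dir" -s 0
  done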
00:14:30.318 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:30.318 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:30.583 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:30.583 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:30.583 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:30.583 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:30.583 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:30.583 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:30.583 [2024-10-30 14:01:28.787109] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:14:30.583 [2024-10-30 14:01:28.787155] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976182 ] 00:14:30.583 [2024-10-30 14:01:28.826063] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:30.583 [2024-10-30 14:01:28.831353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:30.583 [2024-10-30 14:01:28.831369] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fecc9d9d000 00:14:30.583 [2024-10-30 14:01:28.832355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.583 [2024-10-30 14:01:28.833355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.583 [2024-10-30 14:01:28.834359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.583 [2024-10-30 14:01:28.835363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:30.583 [2024-10-30 14:01:28.836370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:30.583 [2024-10-30 14:01:28.837372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.583 [2024-10-30 14:01:28.838378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:30.583 [2024-10-30 14:01:28.839381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.583 [2024-10-30 14:01:28.840390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:30.583 [2024-10-30 14:01:28.840400] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fecc9d92000 00:14:30.583 [2024-10-30 14:01:28.841313] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:30.583 [2024-10-30 14:01:28.850761] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:30.584 [2024-10-30 14:01:28.850783] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:30.584 [2024-10-30 14:01:28.856479] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:30.584 [2024-10-30 14:01:28.856513] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:30.584 [2024-10-30 14:01:28.856573] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:30.584 [2024-10-30 14:01:28.856588] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:30.584 [2024-10-30 14:01:28.856592] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:30.584 [2024-10-30 14:01:28.857482] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:30.584 [2024-10-30 14:01:28.857490] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:30.584 [2024-10-30 14:01:28.857495] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:30.584 [2024-10-30 14:01:28.858485] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:30.584 [2024-10-30 14:01:28.858491] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:30.584 [2024-10-30 14:01:28.858497] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:30.584 [2024-10-30 14:01:28.859486] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:30.584 [2024-10-30 14:01:28.859493] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:30.584 [2024-10-30 14:01:28.860500] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:14:30.584 [2024-10-30 14:01:28.860506] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:30.584 [2024-10-30 14:01:28.860510] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:30.584 [2024-10-30 14:01:28.860514] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:30.584 [2024-10-30 14:01:28.860618] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:30.584 [2024-10-30 14:01:28.860622] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:30.584 [2024-10-30 14:01:28.860626] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:30.584 [2024-10-30 14:01:28.861504] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:30.584 [2024-10-30 14:01:28.862513] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:30.584 [2024-10-30 14:01:28.863515] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:30.584 [2024-10-30 14:01:28.864515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:30.584 [2024-10-30 14:01:28.864567] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:30.584 [2024-10-30 14:01:28.865529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:30.584 [2024-10-30 14:01:28.865541] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:30.584 [2024-10-30 14:01:28.865545] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865562] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:30.584 [2024-10-30 14:01:28.865568] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865580] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:30.584 [2024-10-30 14:01:28.865584] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:30.584 [2024-10-30 14:01:28.865587] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.584 [2024-10-30 14:01:28.865598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:30.584 [2024-10-30 14:01:28.865634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:30.584 [2024-10-30 14:01:28.865641] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:30.584 [2024-10-30 14:01:28.865647] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:30.584 [2024-10-30 14:01:28.865651] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:30.584 [2024-10-30 14:01:28.865654] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:30.584 [2024-10-30 14:01:28.865658] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:30.584 [2024-10-30 14:01:28.865661] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:30.584 [2024-10-30 14:01:28.865664] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865670] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:30.584 [2024-10-30 14:01:28.865687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:30.584 [2024-10-30 14:01:28.865697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.584 [2024-10-30 14:01:28.865704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.584 [2024-10-30 14:01:28.865710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.584 [2024-10-30 14:01:28.865716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.584 [2024-10-30 14:01:28.865719] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865724] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:30.584 [2024-10-30 14:01:28.865740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:30.584 [2024-10-30 14:01:28.865754] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:30.584 
[2024-10-30 14:01:28.865759] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865765] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865769] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865775] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:30.584 [2024-10-30 14:01:28.865785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:30.584 [2024-10-30 14:01:28.865829] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865837] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:30.584 [2024-10-30 14:01:28.865843] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:30.584 [2024-10-30 14:01:28.865846] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:30.584 [2024-10-30 14:01:28.865848] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.585 [2024-10-30 14:01:28.865853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.865864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.865872] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:30.585 [2024-10-30 14:01:28.865879] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865885] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865890] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:30.585 [2024-10-30 14:01:28.865893] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:30.585 [2024-10-30 14:01:28.865895] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.585 [2024-10-30 14:01:28.865900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.865914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.865924] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865930] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865935] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:30.585 [2024-10-30 14:01:28.865938] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:30.585 [2024-10-30 14:01:28.865940] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.585 [2024-10-30 14:01:28.865945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.865953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.865959] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865964] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865969] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865973] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865977] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865982] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865986] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:30.585 [2024-10-30 14:01:28.865989] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:30.585 [2024-10-30 14:01:28.865993] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:30.585 [2024-10-30 14:01:28.866007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.866018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.866026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.866035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.866042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.866055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.866062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.866071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.866080] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:30.585 [2024-10-30 14:01:28.866083] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:30.585 [2024-10-30 14:01:28.866086] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:30.585 [2024-10-30 14:01:28.866089] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:30.585 [2024-10-30 14:01:28.866091] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:30.585 [2024-10-30 14:01:28.866095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:30.585 [2024-10-30 14:01:28.866101] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:30.585 [2024-10-30 14:01:28.866104] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:30.585 [2024-10-30 14:01:28.866106] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.585 [2024-10-30 14:01:28.866111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.866116] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:30.585 [2024-10-30 14:01:28.866119] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:30.585 [2024-10-30 14:01:28.866121] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.585 [2024-10-30 14:01:28.866126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.866132] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:30.585 [2024-10-30 14:01:28.866135] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:30.585 [2024-10-30 14:01:28.866139] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.585 [2024-10-30 14:01:28.866143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:30.585 [2024-10-30 14:01:28.866149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.866157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.866164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:30.585 [2024-10-30 14:01:28.866169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:30.585 ===================================================== 00:14:30.585 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:30.585 ===================================================== 00:14:30.585 Controller Capabilities/Features 00:14:30.585 ================================ 00:14:30.585 Vendor ID: 4e58 00:14:30.585 Subsystem Vendor ID: 4e58 00:14:30.585 Serial Number: SPDK1 00:14:30.585 Model Number: SPDK bdev Controller 00:14:30.585 Firmware Version: 25.01 00:14:30.585 Recommended Arb Burst: 6 00:14:30.585 IEEE OUI Identifier: 8d 6b 50 00:14:30.585 Multi-path I/O 00:14:30.585 May have multiple subsystem ports: Yes 00:14:30.585 May have multiple controllers: Yes 00:14:30.585 Associated with SR-IOV VF: No 00:14:30.585 Max Data Transfer Size: 131072 00:14:30.585 Max Number of Namespaces: 32 00:14:30.585 Max Number of I/O Queues: 127 00:14:30.585 NVMe Specification Version (VS): 1.3 00:14:30.585 NVMe Specification Version (Identify): 1.3 00:14:30.585 Maximum Queue Entries: 256 00:14:30.585 Contiguous Queues Required: Yes 00:14:30.585 Arbitration Mechanisms Supported 00:14:30.585 Weighted Round Robin: Not Supported 00:14:30.585 Vendor Specific: Not Supported 00:14:30.585 Reset Timeout: 15000 ms 00:14:30.586 Doorbell Stride: 4 bytes 00:14:30.586 NVM Subsystem Reset: Not Supported 00:14:30.586 Command Sets Supported 00:14:30.586 NVM Command Set: Supported 00:14:30.586 Boot Partition: Not Supported 00:14:30.586 Memory Page Size Minimum: 4096 bytes 00:14:30.586 Memory Page Size Maximum: 4096 bytes 00:14:30.586 Persistent Memory Region: Not Supported 00:14:30.586 Optional Asynchronous Events Supported 00:14:30.586 Namespace Attribute Notices: Supported 00:14:30.586 Firmware Activation Notices: Not Supported 00:14:30.586 ANA Change Notices: Not Supported 00:14:30.586 PLE Aggregate Log Change Notices: Not Supported 00:14:30.586 LBA Status Info Alert Notices: Not Supported 00:14:30.586 EGE Aggregate Log Change Notices: Not Supported 00:14:30.586 Normal NVM Subsystem Shutdown event: Not Supported 00:14:30.586 Zone Descriptor Change Notices: Not Supported 00:14:30.586 Discovery Log Change Notices: Not Supported 00:14:30.586 Controller Attributes 00:14:30.586 128-bit Host Identifier: Supported 00:14:30.586 Non-Operational Permissive Mode: Not Supported 00:14:30.586 NVM Sets: Not Supported 00:14:30.586 Read Recovery Levels: Not Supported 00:14:30.586 Endurance Groups: Not Supported 00:14:30.586 Predictable Latency Mode: Not Supported 00:14:30.586 Traffic Based Keep ALive: Not Supported 00:14:30.586 Namespace Granularity: Not Supported 00:14:30.586 SQ Associations: Not Supported 00:14:30.586 UUID List: Not Supported 00:14:30.586 Multi-Domain Subsystem: Not Supported 00:14:30.586 Fixed Capacity Management: Not Supported 00:14:30.586 Variable Capacity Management: Not Supported 00:14:30.586 Delete Endurance Group: Not Supported 00:14:30.586 Delete NVM Set: Not Supported 00:14:30.586 Extended LBA Formats Supported: Not Supported 00:14:30.586 Flexible Data Placement Supported: Not Supported 00:14:30.586 00:14:30.586 Controller Memory Buffer Support 00:14:30.586 ================================ 00:14:30.586 
Supported: No 00:14:30.586 00:14:30.586 Persistent Memory Region Support 00:14:30.586 ================================ 00:14:30.586 Supported: No 00:14:30.586 00:14:30.586 Admin Command Set Attributes 00:14:30.586 ============================ 00:14:30.586 Security Send/Receive: Not Supported 00:14:30.586 Format NVM: Not Supported 00:14:30.586 Firmware Activate/Download: Not Supported 00:14:30.586 Namespace Management: Not Supported 00:14:30.586 Device Self-Test: Not Supported 00:14:30.586 Directives: Not Supported 00:14:30.586 NVMe-MI: Not Supported 00:14:30.586 Virtualization Management: Not Supported 00:14:30.586 Doorbell Buffer Config: Not Supported 00:14:30.586 Get LBA Status Capability: Not Supported 00:14:30.586 Command & Feature Lockdown Capability: Not Supported 00:14:30.586 Abort Command Limit: 4 00:14:30.586 Async Event Request Limit: 4 00:14:30.586 Number of Firmware Slots: N/A 00:14:30.586 Firmware Slot 1 Read-Only: N/A 00:14:30.586 Firmware Activation Without Reset: N/A 00:14:30.586 Multiple Update Detection Support: N/A 00:14:30.586 Firmware Update Granularity: No Information Provided 00:14:30.586 Per-Namespace SMART Log: No 00:14:30.586 Asymmetric Namespace Access Log Page: Not Supported 00:14:30.586 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:30.586 Command Effects Log Page: Supported 00:14:30.586 Get Log Page Extended Data: Supported 00:14:30.586 Telemetry Log Pages: Not Supported 00:14:30.586 Persistent Event Log Pages: Not Supported 00:14:30.586 Supported Log Pages Log Page: May Support 00:14:30.586 Commands Supported & Effects Log Page: Not Supported 00:14:30.586 Feature Identifiers & Effects Log Page:May Support 00:14:30.586 NVMe-MI Commands & Effects Log Page: May Support 00:14:30.586 Data Area 4 for Telemetry Log: Not Supported 00:14:30.586 Error Log Page Entries Supported: 128 00:14:30.586 Keep Alive: Supported 00:14:30.586 Keep Alive Granularity: 10000 ms 00:14:30.586 00:14:30.586 NVM Command Set Attributes 00:14:30.586 ========================== 00:14:30.586 Submission Queue Entry Size 00:14:30.586 Max: 64 00:14:30.586 Min: 64 00:14:30.586 Completion Queue Entry Size 00:14:30.586 Max: 16 00:14:30.586 Min: 16 00:14:30.586 Number of Namespaces: 32 00:14:30.586 Compare Command: Supported 00:14:30.586 Write Uncorrectable Command: Not Supported 00:14:30.586 Dataset Management Command: Supported 00:14:30.586 Write Zeroes Command: Supported 00:14:30.586 Set Features Save Field: Not Supported 00:14:30.586 Reservations: Not Supported 00:14:30.586 Timestamp: Not Supported 00:14:30.586 Copy: Supported 00:14:30.586 Volatile Write Cache: Present 00:14:30.586 Atomic Write Unit (Normal): 1 00:14:30.586 Atomic Write Unit (PFail): 1 00:14:30.586 Atomic Compare & Write Unit: 1 00:14:30.586 Fused Compare & Write: Supported 00:14:30.586 Scatter-Gather List 00:14:30.586 SGL Command Set: Supported (Dword aligned) 00:14:30.586 SGL Keyed: Not Supported 00:14:30.586 SGL Bit Bucket Descriptor: Not Supported 00:14:30.586 SGL Metadata Pointer: Not Supported 00:14:30.586 Oversized SGL: Not Supported 00:14:30.586 SGL Metadata Address: Not Supported 00:14:30.586 SGL Offset: Not Supported 00:14:30.586 Transport SGL Data Block: Not Supported 00:14:30.586 Replay Protected Memory Block: Not Supported 00:14:30.586 00:14:30.586 Firmware Slot Information 00:14:30.586 ========================= 00:14:30.586 Active slot: 1 00:14:30.586 Slot 1 Firmware Revision: 25.01 00:14:30.586 00:14:30.586 00:14:30.586 Commands Supported and Effects 00:14:30.586 ============================== 00:14:30.586 Admin 
Commands 00:14:30.586 -------------- 00:14:30.586 Get Log Page (02h): Supported 00:14:30.586 Identify (06h): Supported 00:14:30.586 Abort (08h): Supported 00:14:30.586 Set Features (09h): Supported 00:14:30.586 Get Features (0Ah): Supported 00:14:30.586 Asynchronous Event Request (0Ch): Supported 00:14:30.586 Keep Alive (18h): Supported 00:14:30.586 I/O Commands 00:14:30.586 ------------ 00:14:30.586 Flush (00h): Supported LBA-Change 00:14:30.586 Write (01h): Supported LBA-Change 00:14:30.586 Read (02h): Supported 00:14:30.586 Compare (05h): Supported 00:14:30.586 Write Zeroes (08h): Supported LBA-Change 00:14:30.586 Dataset Management (09h): Supported LBA-Change 00:14:30.586 Copy (19h): Supported LBA-Change 00:14:30.586 00:14:30.586 Error Log 00:14:30.586 ========= 00:14:30.586 00:14:30.586 Arbitration 00:14:30.586 =========== 00:14:30.586 Arbitration Burst: 1 00:14:30.586 00:14:30.586 Power Management 00:14:30.586 ================ 00:14:30.586 Number of Power States: 1 00:14:30.586 Current Power State: Power State #0 00:14:30.586 Power State #0: 00:14:30.586 Max Power: 0.00 W 00:14:30.586 Non-Operational State: Operational 00:14:30.586 Entry Latency: Not Reported 00:14:30.586 Exit Latency: Not Reported 00:14:30.586 Relative Read Throughput: 0 00:14:30.586 Relative Read Latency: 0 00:14:30.587 Relative Write Throughput: 0 00:14:30.587 Relative Write Latency: 0 00:14:30.587 Idle Power: Not Reported 00:14:30.587 Active Power: Not Reported 00:14:30.587 Non-Operational Permissive Mode: Not Supported 00:14:30.587 00:14:30.587 Health Information 00:14:30.587 ================== 00:14:30.587 Critical Warnings: 00:14:30.587 Available Spare Space: OK 00:14:30.587 Temperature: OK 00:14:30.587 Device Reliability: OK 00:14:30.587 Read Only: No 00:14:30.587 Volatile Memory Backup: OK 00:14:30.587 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:30.587 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:30.587 Available Spare: 0% 00:14:30.587 Available Sp[2024-10-30 14:01:28.866240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:30.587 [2024-10-30 14:01:28.866248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:30.587 [2024-10-30 14:01:28.866270] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:30.587 [2024-10-30 14:01:28.866277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.587 [2024-10-30 14:01:28.866282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.587 [2024-10-30 14:01:28.866286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.587 [2024-10-30 14:01:28.866291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.587 [2024-10-30 14:01:28.869754] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:30.587 [2024-10-30 14:01:28.869764] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:30.587 [2024-10-30 14:01:28.870556] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:30.587 [2024-10-30 14:01:28.870598] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:30.587 [2024-10-30 14:01:28.870605] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:30.587 [2024-10-30 14:01:28.871565] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:30.587 [2024-10-30 14:01:28.871575] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:30.587 [2024-10-30 14:01:28.871628] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:30.587 [2024-10-30 14:01:28.872584] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:30.849 are Threshold: 0% 00:14:30.849 Life Percentage Used: 0% 00:14:30.849 Data Units Read: 0 00:14:30.849 Data Units Written: 0 00:14:30.849 Host Read Commands: 0 00:14:30.849 Host Write Commands: 0 00:14:30.849 Controller Busy Time: 0 minutes 00:14:30.849 Power Cycles: 0 00:14:30.849 Power On Hours: 0 hours 00:14:30.849 Unsafe Shutdowns: 0 00:14:30.849 Unrecoverable Media Errors: 0 00:14:30.849 Lifetime Error Log Entries: 0 00:14:30.849 Warning Temperature Time: 0 minutes 00:14:30.849 Critical Temperature Time: 0 minutes 00:14:30.849 00:14:30.849 Number of Queues 00:14:30.849 ================ 00:14:30.849 Number of I/O Submission Queues: 127 00:14:30.849 Number of I/O Completion Queues: 127 00:14:30.849 00:14:30.849 Active Namespaces 00:14:30.849 ================= 00:14:30.849 Namespace ID:1 00:14:30.849 Error Recovery Timeout: Unlimited 00:14:30.849 Command Set Identifier: NVM (00h) 00:14:30.849 Deallocate: Supported 00:14:30.849 Deallocated/Unwritten Error: Not Supported 00:14:30.849 Deallocated Read Value: Unknown 00:14:30.849 Deallocate in Write Zeroes: Not Supported 00:14:30.849 Deallocated Guard Field: 0xFFFF 00:14:30.849 Flush: Supported 00:14:30.849 Reservation: Supported 00:14:30.849 Namespace Sharing Capabilities: Multiple Controllers 00:14:30.849 Size (in LBAs): 131072 (0GiB) 00:14:30.849 Capacity (in LBAs): 131072 (0GiB) 00:14:30.849 Utilization (in LBAs): 131072 (0GiB) 00:14:30.849 NGUID: 42055AD96F60415E9E8FC7CBCA2DAE7D 00:14:30.849 UUID: 42055ad9-6f60-415e-9e8f-c7cbca2dae7d 00:14:30.849 Thin Provisioning: Not Supported 00:14:30.849 Per-NS Atomic Units: Yes 00:14:30.849 Atomic Boundary Size (Normal): 0 00:14:30.849 Atomic Boundary Size (PFail): 0 00:14:30.849 Atomic Boundary Offset: 0 00:14:30.849 Maximum Single Source Range Length: 65535 00:14:30.849 Maximum Copy Length: 65535 00:14:30.849 Maximum Source Range Count: 1 00:14:30.849 NGUID/EUI64 Never Reused: No 00:14:30.849 Namespace Write Protected: No 00:14:30.849 Number of LBA Formats: 1 00:14:30.849 Current LBA Format: LBA Format #00 00:14:30.849 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:30.849 00:14:30.849 14:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
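The spdk_nvme_perf run whose output follows addresses the controller purely through the -r transport ID string (trtype/traddr/subnqn) and then picks queue depth (-q), IO size (-o), workload (-w), run time (-t) and core mask (-c) on the command line. A hedged variation on the same invocation is sketched below; the 70/30 random read/write mix, queue depth 32 and the 0x4 core mask are illustrative choices, not values taken from this run.

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

  # same target and memory settings as above, different workload:
  # 4 KiB randrw at 70% reads, queue depth 32, 5 seconds, core 2 only (mask 0x4)
  $PERF -r "$TRID" -s 256 -g -q 32 -o 4096 -w randrw -M 70 -t 5 -c 0x4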
00:14:30.849 [2024-10-30 14:01:29.060431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:36.140 Initializing NVMe Controllers 00:14:36.140 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:36.140 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:36.140 Initialization complete. Launching workers. 00:14:36.140 ======================================================== 00:14:36.140 Latency(us) 00:14:36.140 Device Information : IOPS MiB/s Average min max 00:14:36.140 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40013.37 156.30 3198.80 855.18 9754.53 00:14:36.140 ======================================================== 00:14:36.140 Total : 40013.37 156.30 3198.80 855.18 9754.53 00:14:36.140 00:14:36.140 [2024-10-30 14:01:34.080226] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:36.140 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:36.140 [2024-10-30 14:01:34.270067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.436 Initializing NVMe Controllers 00:14:41.436 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:41.436 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:41.436 Initialization complete. Launching workers. 
00:14:41.436 ======================================================== 00:14:41.436 Latency(us) 00:14:41.436 Device Information : IOPS MiB/s Average min max 00:14:41.436 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16027.18 62.61 7991.99 5988.23 15963.51 00:14:41.436 ======================================================== 00:14:41.436 Total : 16027.18 62.61 7991.99 5988.23 15963.51 00:14:41.436 00:14:41.436 [2024-10-30 14:01:39.311430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.436 14:01:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:41.436 [2024-10-30 14:01:39.512285] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.734 [2024-10-30 14:01:44.573935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.734 Initializing NVMe Controllers 00:14:46.734 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.734 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.734 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:46.734 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:46.734 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:46.734 Initialization complete. Launching workers. 00:14:46.734 Starting thread on core 2 00:14:46.734 Starting thread on core 3 00:14:46.734 Starting thread on core 1 00:14:46.734 14:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:46.734 [2024-10-30 14:01:44.823081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:50.039 [2024-10-30 14:01:47.882323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.039 Initializing NVMe Controllers 00:14:50.039 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:50.039 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:50.039 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:50.039 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:50.039 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:50.039 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:50.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:50.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:50.039 Initialization complete. Launching workers. 
00:14:50.039 Starting thread on core 1 with urgent priority queue 00:14:50.039 Starting thread on core 2 with urgent priority queue 00:14:50.039 Starting thread on core 3 with urgent priority queue 00:14:50.039 Starting thread on core 0 with urgent priority queue 00:14:50.039 SPDK bdev Controller (SPDK1 ) core 0: 15709.00 IO/s 6.37 secs/100000 ios 00:14:50.039 SPDK bdev Controller (SPDK1 ) core 1: 8253.67 IO/s 12.12 secs/100000 ios 00:14:50.039 SPDK bdev Controller (SPDK1 ) core 2: 13977.00 IO/s 7.15 secs/100000 ios 00:14:50.039 SPDK bdev Controller (SPDK1 ) core 3: 8516.00 IO/s 11.74 secs/100000 ios 00:14:50.039 ======================================================== 00:14:50.039 00:14:50.040 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:50.040 [2024-10-30 14:01:48.127135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:50.040 Initializing NVMe Controllers 00:14:50.040 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:50.040 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:50.040 Namespace ID: 1 size: 0GB 00:14:50.040 Initialization complete. 00:14:50.040 INFO: using host memory buffer for IO 00:14:50.040 Hello world! 00:14:50.040 [2024-10-30 14:01:48.161334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.040 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:50.302 [2024-10-30 14:01:48.396278] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.246 Initializing NVMe Controllers 00:14:51.246 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.246 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.246 Initialization complete. Launching workers. 
00:14:51.246 submit (in ns) avg, min, max = 6333.9, 2823.3, 4003065.0 00:14:51.246 complete (in ns) avg, min, max = 16803.0, 1638.3, 4005371.7 00:14:51.246 00:14:51.246 Submit histogram 00:14:51.246 ================ 00:14:51.246 Range in us Cumulative Count 00:14:51.246 2.813 - 2.827: 0.0496% ( 10) 00:14:51.246 2.827 - 2.840: 1.3974% ( 272) 00:14:51.246 2.840 - 2.853: 3.6619% ( 457) 00:14:51.246 2.853 - 2.867: 7.7350% ( 822) 00:14:51.247 2.867 - 2.880: 12.6951% ( 1001) 00:14:51.247 2.880 - 2.893: 19.6125% ( 1396) 00:14:51.247 2.893 - 2.907: 26.1979% ( 1329) 00:14:51.247 2.907 - 2.920: 31.6436% ( 1099) 00:14:51.247 2.920 - 2.933: 36.5988% ( 1000) 00:14:51.247 2.933 - 2.947: 40.4638% ( 780) 00:14:51.247 2.947 - 2.960: 45.1167% ( 939) 00:14:51.247 2.960 - 2.973: 52.2125% ( 1432) 00:14:51.247 2.973 - 2.987: 61.9593% ( 1967) 00:14:51.247 2.987 - 3.000: 70.9628% ( 1817) 00:14:51.247 3.000 - 3.013: 79.0892% ( 1640) 00:14:51.247 3.013 - 3.027: 86.0512% ( 1405) 00:14:51.247 3.027 - 3.040: 91.4920% ( 1098) 00:14:51.247 3.040 - 3.053: 94.9606% ( 700) 00:14:51.247 3.053 - 3.067: 96.9922% ( 410) 00:14:51.247 3.067 - 3.080: 98.0774% ( 219) 00:14:51.247 3.080 - 3.093: 98.6472% ( 115) 00:14:51.247 3.093 - 3.107: 99.0090% ( 73) 00:14:51.247 3.107 - 3.120: 99.2567% ( 50) 00:14:51.247 3.120 - 3.133: 99.3905% ( 27) 00:14:51.247 3.133 - 3.147: 99.4549% ( 13) 00:14:51.247 3.147 - 3.160: 99.4896% ( 7) 00:14:51.247 3.173 - 3.187: 99.4946% ( 1) 00:14:51.247 3.187 - 3.200: 99.5045% ( 2) 00:14:51.247 3.267 - 3.280: 99.5094% ( 1) 00:14:51.247 3.280 - 3.293: 99.5144% ( 1) 00:14:51.247 3.347 - 3.360: 99.5193% ( 1) 00:14:51.247 3.467 - 3.493: 99.5392% ( 4) 00:14:51.247 3.573 - 3.600: 99.5441% ( 1) 00:14:51.247 3.600 - 3.627: 99.5491% ( 1) 00:14:51.247 3.707 - 3.733: 99.5540% ( 1) 00:14:51.247 3.733 - 3.760: 99.5590% ( 1) 00:14:51.247 3.840 - 3.867: 99.5639% ( 1) 00:14:51.247 4.107 - 4.133: 99.5739% ( 2) 00:14:51.247 4.133 - 4.160: 99.5788% ( 1) 00:14:51.247 4.160 - 4.187: 99.5838% ( 1) 00:14:51.247 4.293 - 4.320: 99.5887% ( 1) 00:14:51.247 4.320 - 4.347: 99.5937% ( 1) 00:14:51.247 4.347 - 4.373: 99.6036% ( 2) 00:14:51.247 4.427 - 4.453: 99.6185% ( 3) 00:14:51.247 4.533 - 4.560: 99.6234% ( 1) 00:14:51.247 4.613 - 4.640: 99.6333% ( 2) 00:14:51.247 4.693 - 4.720: 99.6432% ( 2) 00:14:51.247 4.720 - 4.747: 99.6531% ( 2) 00:14:51.247 4.747 - 4.773: 99.6630% ( 2) 00:14:51.247 4.800 - 4.827: 99.6680% ( 1) 00:14:51.247 4.827 - 4.853: 99.6730% ( 1) 00:14:51.247 4.853 - 4.880: 99.6829% ( 2) 00:14:51.247 4.907 - 4.933: 99.6878% ( 1) 00:14:51.247 4.933 - 4.960: 99.6977% ( 2) 00:14:51.247 5.013 - 5.040: 99.7076% ( 2) 00:14:51.247 5.040 - 5.067: 99.7126% ( 1) 00:14:51.247 5.093 - 5.120: 99.7176% ( 1) 00:14:51.247 5.173 - 5.200: 99.7225% ( 1) 00:14:51.247 5.307 - 5.333: 99.7275% ( 1) 00:14:51.247 5.893 - 5.920: 99.7324% ( 1) 00:14:51.247 6.000 - 6.027: 99.7374% ( 1) 00:14:51.247 6.080 - 6.107: 99.7423% ( 1) 00:14:51.247 6.107 - 6.133: 99.7473% ( 1) 00:14:51.247 6.133 - 6.160: 99.7522% ( 1) 00:14:51.247 6.160 - 6.187: 99.7572% ( 1) 00:14:51.247 6.240 - 6.267: 99.7622% ( 1) 00:14:51.247 6.267 - 6.293: 99.7770% ( 3) 00:14:51.247 6.320 - 6.347: 99.7869% ( 2) 00:14:51.247 6.347 - 6.373: 99.7919% ( 1) 00:14:51.247 6.373 - 6.400: 99.7968% ( 1) 00:14:51.247 6.427 - 6.453: 99.8018% ( 1) 00:14:51.247 6.533 - 6.560: 99.8117% ( 2) 00:14:51.247 6.587 - 6.613: 99.8167% ( 1) 00:14:51.247 6.613 - 6.640: 99.8216% ( 1) 00:14:51.247 6.667 - 6.693: 99.8266% ( 1) 00:14:51.247 6.693 - 6.720: 99.8365% ( 2) 00:14:51.247 6.800 - 6.827: 99.8464% ( 2) 
00:14:51.247 6.880 - 6.933: 99.8513% ( 1) 00:14:51.247 6.933 - 6.987: 99.8613% ( 2) 00:14:51.247 [2024-10-30 14:01:49.416925] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.247 6.987 - 7.040: 99.8662% ( 1) 00:14:51.247 7.040 - 7.093: 99.8712% ( 1) 00:14:51.247 7.093 - 7.147: 99.8761% ( 1) 00:14:51.247 7.307 - 7.360: 99.8860% ( 2) 00:14:51.247 7.360 - 7.413: 99.8910% ( 1) 00:14:51.247 7.520 - 7.573: 99.8959% ( 1) 00:14:51.247 7.680 - 7.733: 99.9009% ( 1) 00:14:51.247 7.893 - 7.947: 99.9059% ( 1) 00:14:51.247 8.693 - 8.747: 99.9108% ( 1) 00:14:51.247 9.067 - 9.120: 99.9158% ( 1) 00:14:51.247 3986.773 - 4014.080: 100.0000% ( 17) 00:14:51.247 00:14:51.247 Complete histogram 00:14:51.247 ================== 00:14:51.247 Range in us Cumulative Count 00:14:51.247 1.633 - 1.640: 0.0050% ( 1) 00:14:51.247 1.640 - 1.647: 0.0099% ( 1) 00:14:51.247 1.647 - 1.653: 0.1288% ( 24) 00:14:51.247 1.653 - 1.660: 1.0505% ( 186) 00:14:51.247 1.660 - 1.667: 1.1991% ( 30) 00:14:51.247 1.667 - 1.673: 1.3428% ( 29) 00:14:51.247 1.673 - 1.680: 1.4766% ( 27) 00:14:51.247 1.680 - 1.687: 1.5262% ( 10) 00:14:51.247 1.687 - 1.693: 1.5708% ( 9) 00:14:51.247 1.693 - 1.700: 1.6005% ( 6) 00:14:51.247 1.700 - 1.707: 1.6104% ( 2) 00:14:51.247 1.707 - 1.720: 1.6996% ( 18) 00:14:51.247 1.720 - 1.733: 41.2963% ( 7991) 00:14:51.247 1.733 - 1.747: 67.0036% ( 5188) 00:14:51.247 1.747 - 1.760: 81.8344% ( 2993) 00:14:51.247 1.760 - 1.773: 84.8422% ( 607) 00:14:51.247 1.773 - 1.787: 85.8629% ( 206) 00:14:51.247 1.787 - 1.800: 89.9906% ( 833) 00:14:51.247 1.800 - 1.813: 95.0994% ( 1031) 00:14:51.247 1.813 - 1.827: 98.2508% ( 636) 00:14:51.247 1.827 - 1.840: 99.2419% ( 200) 00:14:51.247 1.840 - 1.853: 99.4153% ( 35) 00:14:51.247 1.853 - 1.867: 99.4252% ( 2) 00:14:51.247 1.920 - 1.933: 99.4302% ( 1) 00:14:51.247 1.973 - 1.987: 99.4351% ( 1) 00:14:51.247 1.987 - 2.000: 99.4401% ( 1) 00:14:51.247 2.027 - 2.040: 99.4450% ( 1) 00:14:51.247 2.040 - 2.053: 99.4500% ( 1) 00:14:51.247 2.067 - 2.080: 99.4549% ( 1) 00:14:51.247 2.280 - 2.293: 99.4599% ( 1) 00:14:51.247 3.293 - 3.307: 99.4648% ( 1) 00:14:51.247 3.400 - 3.413: 99.4748% ( 2) 00:14:51.247 3.440 - 3.467: 99.4797% ( 1) 00:14:51.247 3.467 - 3.493: 99.4847% ( 1) 00:14:51.247 3.520 - 3.547: 99.4896% ( 1) 00:14:51.247 3.547 - 3.573: 99.5045% ( 3) 00:14:51.247 4.373 - 4.400: 99.5094% ( 1) 00:14:51.247 4.560 - 4.587: 99.5144% ( 1) 00:14:51.247 4.853 - 4.880: 99.5193% ( 1) 00:14:51.247 4.987 - 5.013: 99.5243% ( 1) 00:14:51.247 5.040 - 5.067: 99.5342% ( 2) 00:14:51.247 5.093 - 5.120: 99.5392% ( 1) 00:14:51.247 5.120 - 5.147: 99.5441% ( 1) 00:14:51.247 5.147 - 5.173: 99.5491% ( 1) 00:14:51.247 5.333 - 5.360: 99.5540% ( 1) 00:14:51.247 5.573 - 5.600: 99.5590% ( 1) 00:14:51.247 5.600 - 5.627: 99.5639% ( 1) 00:14:51.247 5.627 - 5.653: 99.5689% ( 1) 00:14:51.247 5.787 - 5.813: 99.5739% ( 1) 00:14:51.247 5.920 - 5.947: 99.5788% ( 1) 00:14:51.247 6.213 - 6.240: 99.5887% ( 2) 00:14:51.247 6.427 - 6.453: 99.5937% ( 1) 00:14:51.247 6.453 - 6.480: 99.5986% ( 1) 00:14:51.247 6.480 - 6.507: 99.6036% ( 1) 00:14:51.247 6.560 - 6.587: 99.6085% ( 1) 00:14:51.247 6.640 - 6.667: 99.6135% ( 1) 00:14:51.247 6.747 - 6.773: 99.6185% ( 1) 00:14:51.247 132.267 - 133.120: 99.6234% ( 1) 00:14:51.247 3986.773 - 4014.080: 100.0000% ( 76) 00:14:51.247 00:14:51.247 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:51.247 14:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:51.247 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:51.247 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:51.247 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:51.512 [ 00:14:51.512 { 00:14:51.512 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:51.512 "subtype": "Discovery", 00:14:51.512 "listen_addresses": [], 00:14:51.512 "allow_any_host": true, 00:14:51.512 "hosts": [] 00:14:51.512 }, 00:14:51.512 { 00:14:51.512 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:51.512 "subtype": "NVMe", 00:14:51.512 "listen_addresses": [ 00:14:51.512 { 00:14:51.512 "trtype": "VFIOUSER", 00:14:51.512 "adrfam": "IPv4", 00:14:51.512 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:51.512 "trsvcid": "0" 00:14:51.512 } 00:14:51.512 ], 00:14:51.512 "allow_any_host": true, 00:14:51.512 "hosts": [], 00:14:51.512 "serial_number": "SPDK1", 00:14:51.512 "model_number": "SPDK bdev Controller", 00:14:51.512 "max_namespaces": 32, 00:14:51.512 "min_cntlid": 1, 00:14:51.512 "max_cntlid": 65519, 00:14:51.512 "namespaces": [ 00:14:51.512 { 00:14:51.512 "nsid": 1, 00:14:51.512 "bdev_name": "Malloc1", 00:14:51.512 "name": "Malloc1", 00:14:51.512 "nguid": "42055AD96F60415E9E8FC7CBCA2DAE7D", 00:14:51.512 "uuid": "42055ad9-6f60-415e-9e8f-c7cbca2dae7d" 00:14:51.512 } 00:14:51.512 ] 00:14:51.512 }, 00:14:51.512 { 00:14:51.512 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:51.512 "subtype": "NVMe", 00:14:51.512 "listen_addresses": [ 00:14:51.512 { 00:14:51.512 "trtype": "VFIOUSER", 00:14:51.512 "adrfam": "IPv4", 00:14:51.512 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:51.512 "trsvcid": "0" 00:14:51.512 } 00:14:51.512 ], 00:14:51.512 "allow_any_host": true, 00:14:51.512 "hosts": [], 00:14:51.512 "serial_number": "SPDK2", 00:14:51.512 "model_number": "SPDK bdev Controller", 00:14:51.512 "max_namespaces": 32, 00:14:51.513 "min_cntlid": 1, 00:14:51.513 "max_cntlid": 65519, 00:14:51.513 "namespaces": [ 00:14:51.513 { 00:14:51.513 "nsid": 1, 00:14:51.513 "bdev_name": "Malloc2", 00:14:51.513 "name": "Malloc2", 00:14:51.513 "nguid": "95CB6DEE0D5244B98C89A29472A7D809", 00:14:51.513 "uuid": "95cb6dee-0d52-44b9-8c89-a29472a7d809" 00:14:51.513 } 00:14:51.513 ] 00:14:51.513 } 00:14:51.513 ] 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=980319 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:51.513 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:51.513 [2024-10-30 14:01:49.799922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.777 Malloc3 00:14:51.777 14:01:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:51.777 [2024-10-30 14:01:49.988164] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.777 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:51.777 Asynchronous Event Request test 00:14:51.777 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.777 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.777 Registering asynchronous event callbacks... 00:14:51.777 Starting namespace attribute notice tests for all controllers... 00:14:51.777 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:51.777 aer_cb - Changed Namespace 00:14:51.777 Cleaning up... 
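The trace above is the namespace-attribute AER path: the aer example is started in the background against cnode1 with a touch-file argument, the script waits for that file to appear before triggering anything (which implies the tool creates it once its event callbacks are armed), and only then attaches a new malloc bdev as namespace 2 over RPC, producing the "Changed Namespace" notice before the tool exits. A condensed sketch of that flow, reusing only the commands and paths visible in the trace (the polling loop is a stand-in for the script's waitforfile helper, and SPDK_ROOT simply names the checkout used in this workspace):

  #!/usr/bin/env bash
  # Sketch of the AER namespace-change flow exercised above.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRADDR=/var/run/vfio-user/domain/vfio-user1/1
  SUBNQN=nqn.2019-07.io.spdk:cnode1
  TOUCH_FILE=/tmp/aer_touch_file

  # Start the AER listener in the background; -t makes it drop the touch file when ready.
  "$SPDK_ROOT/test/nvme/aer/aer" \
      -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$SUBNQN" \
      -n 2 -g -t "$TOUCH_FILE" &
  aerpid=$!

  while [ ! -e "$TOUCH_FILE" ]; do sleep 1; done   # stand-in for the script's waitforfile
  rm -f "$TOUCH_FILE"

  # Adding a second namespace is what triggers the Changed Namespace AER seen above.
  "$SPDK_ROOT/scripts/rpc.py" bdev_malloc_create 64 512 --name Malloc3
  "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_ns "$SUBNQN" Malloc3 -n 2
  "$SPDK_ROOT/scripts/rpc.py" nvmf_get_subsystems   # listing now shows Malloc3 as nsid 2
  wait "$aerpid"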
00:14:52.038 [ 00:14:52.038 { 00:14:52.038 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:52.038 "subtype": "Discovery", 00:14:52.038 "listen_addresses": [], 00:14:52.038 "allow_any_host": true, 00:14:52.038 "hosts": [] 00:14:52.038 }, 00:14:52.038 { 00:14:52.038 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:52.038 "subtype": "NVMe", 00:14:52.038 "listen_addresses": [ 00:14:52.038 { 00:14:52.038 "trtype": "VFIOUSER", 00:14:52.038 "adrfam": "IPv4", 00:14:52.038 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:52.038 "trsvcid": "0" 00:14:52.038 } 00:14:52.038 ], 00:14:52.038 "allow_any_host": true, 00:14:52.038 "hosts": [], 00:14:52.038 "serial_number": "SPDK1", 00:14:52.038 "model_number": "SPDK bdev Controller", 00:14:52.038 "max_namespaces": 32, 00:14:52.038 "min_cntlid": 1, 00:14:52.038 "max_cntlid": 65519, 00:14:52.038 "namespaces": [ 00:14:52.038 { 00:14:52.038 "nsid": 1, 00:14:52.038 "bdev_name": "Malloc1", 00:14:52.038 "name": "Malloc1", 00:14:52.038 "nguid": "42055AD96F60415E9E8FC7CBCA2DAE7D", 00:14:52.038 "uuid": "42055ad9-6f60-415e-9e8f-c7cbca2dae7d" 00:14:52.038 }, 00:14:52.038 { 00:14:52.038 "nsid": 2, 00:14:52.038 "bdev_name": "Malloc3", 00:14:52.038 "name": "Malloc3", 00:14:52.038 "nguid": "779CACAAA5EB46F0A131792EDCC56D25", 00:14:52.038 "uuid": "779cacaa-a5eb-46f0-a131-792edcc56d25" 00:14:52.038 } 00:14:52.038 ] 00:14:52.038 }, 00:14:52.038 { 00:14:52.038 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:52.038 "subtype": "NVMe", 00:14:52.038 "listen_addresses": [ 00:14:52.038 { 00:14:52.038 "trtype": "VFIOUSER", 00:14:52.038 "adrfam": "IPv4", 00:14:52.038 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:52.038 "trsvcid": "0" 00:14:52.038 } 00:14:52.038 ], 00:14:52.038 "allow_any_host": true, 00:14:52.038 "hosts": [], 00:14:52.038 "serial_number": "SPDK2", 00:14:52.038 "model_number": "SPDK bdev Controller", 00:14:52.038 "max_namespaces": 32, 00:14:52.038 "min_cntlid": 1, 00:14:52.038 "max_cntlid": 65519, 00:14:52.038 "namespaces": [ 00:14:52.038 { 00:14:52.038 "nsid": 1, 00:14:52.038 "bdev_name": "Malloc2", 00:14:52.038 "name": "Malloc2", 00:14:52.038 "nguid": "95CB6DEE0D5244B98C89A29472A7D809", 00:14:52.038 "uuid": "95cb6dee-0d52-44b9-8c89-a29472a7d809" 00:14:52.038 } 00:14:52.038 ] 00:14:52.039 } 00:14:52.039 ] 00:14:52.039 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 980319 00:14:52.039 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.039 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:52.039 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:52.039 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:52.039 [2024-10-30 14:01:50.230772] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:14:52.039 [2024-10-30 14:01:50.230814] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980356 ] 00:14:52.039 [2024-10-30 14:01:50.271055] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:52.039 [2024-10-30 14:01:50.276272] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.039 [2024-10-30 14:01:50.276291] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9be9640000 00:14:52.039 [2024-10-30 14:01:50.277277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.039 [2024-10-30 14:01:50.278278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.039 [2024-10-30 14:01:50.279286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.039 [2024-10-30 14:01:50.280296] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.039 [2024-10-30 14:01:50.281305] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.039 [2024-10-30 14:01:50.282309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.039 [2024-10-30 14:01:50.283315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.039 [2024-10-30 14:01:50.284317] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.039 [2024-10-30 14:01:50.285323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.039 [2024-10-30 14:01:50.285333] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9be9635000 00:14:52.039 [2024-10-30 14:01:50.286248] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:52.039 [2024-10-30 14:01:50.295623] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:52.039 [2024-10-30 14:01:50.295649] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:52.039 [2024-10-30 14:01:50.300725] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:52.039 [2024-10-30 14:01:50.300761] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:52.039 [2024-10-30 14:01:50.300827] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:52.039 
[2024-10-30 14:01:50.300842] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:52.039 [2024-10-30 14:01:50.300846] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:52.039 [2024-10-30 14:01:50.301725] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:52.039 [2024-10-30 14:01:50.301732] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:52.039 [2024-10-30 14:01:50.301737] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:52.039 [2024-10-30 14:01:50.302731] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:52.039 [2024-10-30 14:01:50.302738] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:52.039 [2024-10-30 14:01:50.302744] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:52.039 [2024-10-30 14:01:50.303735] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:52.039 [2024-10-30 14:01:50.303741] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:52.039 [2024-10-30 14:01:50.304752] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:52.039 [2024-10-30 14:01:50.304759] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:52.039 [2024-10-30 14:01:50.304764] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:52.039 [2024-10-30 14:01:50.304769] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:52.039 [2024-10-30 14:01:50.304874] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:52.039 [2024-10-30 14:01:50.304877] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:52.039 [2024-10-30 14:01:50.304881] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:52.039 [2024-10-30 14:01:50.305753] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:52.039 [2024-10-30 14:01:50.306761] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:52.039 [2024-10-30 14:01:50.307762] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:52.039 [2024-10-30 14:01:50.308767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:52.039 [2024-10-30 14:01:50.308808] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:52.039 [2024-10-30 14:01:50.309777] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:52.039 [2024-10-30 14:01:50.309786] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:52.039 [2024-10-30 14:01:50.309789] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:52.039 [2024-10-30 14:01:50.309805] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:52.039 [2024-10-30 14:01:50.309811] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:52.039 [2024-10-30 14:01:50.309824] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.039 [2024-10-30 14:01:50.309828] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.039 [2024-10-30 14:01:50.309831] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.039 [2024-10-30 14:01:50.309843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.039 [2024-10-30 14:01:50.318753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:52.039 [2024-10-30 14:01:50.318763] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:52.039 [2024-10-30 14:01:50.318767] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:52.039 [2024-10-30 14:01:50.318771] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:52.039 [2024-10-30 14:01:50.318774] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:52.039 [2024-10-30 14:01:50.318778] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:52.039 [2024-10-30 14:01:50.318783] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:52.039 [2024-10-30 14:01:50.318787] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:52.039 [2024-10-30 14:01:50.318793] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:52.039 [2024-10-30 
14:01:50.318801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:52.039 [2024-10-30 14:01:50.326750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:52.039 [2024-10-30 14:01:50.326763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.039 [2024-10-30 14:01:50.326770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.039 [2024-10-30 14:01:50.326776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.039 [2024-10-30 14:01:50.326782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.039 [2024-10-30 14:01:50.326786] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:52.039 [2024-10-30 14:01:50.326791] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:52.039 [2024-10-30 14:01:50.326798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:52.039 [2024-10-30 14:01:50.334751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:52.039 [2024-10-30 14:01:50.334759] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:52.039 [2024-10-30 14:01:50.334763] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:52.039 [2024-10-30 14:01:50.334769] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:52.039 [2024-10-30 14:01:50.334773] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:52.039 [2024-10-30 14:01:50.334779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.300 [2024-10-30 14:01:50.342752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:52.300 [2024-10-30 14:01:50.342801] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:52.300 [2024-10-30 14:01:50.342808] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:52.300 [2024-10-30 14:01:50.342814] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:52.300 [2024-10-30 14:01:50.342817] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:14:52.300 [2024-10-30 14:01:50.342820] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.300 [2024-10-30 14:01:50.342825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:52.300 [2024-10-30 14:01:50.350749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:52.300 [2024-10-30 14:01:50.350759] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:52.300 [2024-10-30 14:01:50.350770] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:52.300 [2024-10-30 14:01:50.350776] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.350781] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.301 [2024-10-30 14:01:50.350784] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.301 [2024-10-30 14:01:50.350787] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.301 [2024-10-30 14:01:50.350791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.301 [2024-10-30 14:01:50.358749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:52.301 [2024-10-30 14:01:50.358760] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.358767] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.358772] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.301 [2024-10-30 14:01:50.358775] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.301 [2024-10-30 14:01:50.358777] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.301 [2024-10-30 14:01:50.358782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.301 [2024-10-30 14:01:50.366752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:52.301 [2024-10-30 14:01:50.366759] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.366764] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.366771] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.366776] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.366779] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.366784] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.366788] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:52.301 [2024-10-30 14:01:50.366791] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:52.301 [2024-10-30 14:01:50.366795] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:52.301 [2024-10-30 14:01:50.366812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:52.301 [2024-10-30 14:01:50.374751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:52.301 [2024-10-30 14:01:50.374761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:52.301 [2024-10-30 14:01:50.382750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:52.301 [2024-10-30 14:01:50.382759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:52.301 [2024-10-30 14:01:50.390750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:52.301 [2024-10-30 14:01:50.390760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.301 [2024-10-30 14:01:50.398751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:52.301 [2024-10-30 14:01:50.398764] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:52.301 [2024-10-30 14:01:50.398767] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:52.301 [2024-10-30 14:01:50.398770] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:52.301 [2024-10-30 14:01:50.398772] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:52.301 [2024-10-30 14:01:50.398775] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:52.301 [2024-10-30 14:01:50.398779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:52.301 [2024-10-30 14:01:50.398785] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:52.301 
[2024-10-30 14:01:50.398788] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:52.301 [2024-10-30 14:01:50.398790] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.301 [2024-10-30 14:01:50.398794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:52.301 [2024-10-30 14:01:50.398800] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:52.301 [2024-10-30 14:01:50.398803] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.301 [2024-10-30 14:01:50.398805] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.301 [2024-10-30 14:01:50.398809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.301 [2024-10-30 14:01:50.398816] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:52.301 [2024-10-30 14:01:50.398819] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:52.301 [2024-10-30 14:01:50.398822] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.301 [2024-10-30 14:01:50.398826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:52.301 [2024-10-30 14:01:50.406750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:52.301 [2024-10-30 14:01:50.406761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:52.301 [2024-10-30 14:01:50.406769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:52.301 [2024-10-30 14:01:50.406776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:52.301 ===================================================== 00:14:52.301 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:52.301 ===================================================== 00:14:52.301 Controller Capabilities/Features 00:14:52.301 ================================ 00:14:52.301 Vendor ID: 4e58 00:14:52.301 Subsystem Vendor ID: 4e58 00:14:52.301 Serial Number: SPDK2 00:14:52.301 Model Number: SPDK bdev Controller 00:14:52.301 Firmware Version: 25.01 00:14:52.301 Recommended Arb Burst: 6 00:14:52.301 IEEE OUI Identifier: 8d 6b 50 00:14:52.301 Multi-path I/O 00:14:52.301 May have multiple subsystem ports: Yes 00:14:52.301 May have multiple controllers: Yes 00:14:52.301 Associated with SR-IOV VF: No 00:14:52.301 Max Data Transfer Size: 131072 00:14:52.301 Max Number of Namespaces: 32 00:14:52.301 Max Number of I/O Queues: 127 00:14:52.301 NVMe Specification Version (VS): 1.3 00:14:52.301 NVMe Specification Version (Identify): 1.3 00:14:52.301 Maximum Queue Entries: 256 00:14:52.301 Contiguous Queues Required: Yes 00:14:52.301 Arbitration Mechanisms Supported 00:14:52.301 Weighted Round Robin: Not Supported 00:14:52.301 Vendor Specific: Not 
Supported 00:14:52.301 Reset Timeout: 15000 ms 00:14:52.301 Doorbell Stride: 4 bytes 00:14:52.301 NVM Subsystem Reset: Not Supported 00:14:52.301 Command Sets Supported 00:14:52.301 NVM Command Set: Supported 00:14:52.301 Boot Partition: Not Supported 00:14:52.301 Memory Page Size Minimum: 4096 bytes 00:14:52.301 Memory Page Size Maximum: 4096 bytes 00:14:52.301 Persistent Memory Region: Not Supported 00:14:52.301 Optional Asynchronous Events Supported 00:14:52.301 Namespace Attribute Notices: Supported 00:14:52.301 Firmware Activation Notices: Not Supported 00:14:52.301 ANA Change Notices: Not Supported 00:14:52.301 PLE Aggregate Log Change Notices: Not Supported 00:14:52.301 LBA Status Info Alert Notices: Not Supported 00:14:52.301 EGE Aggregate Log Change Notices: Not Supported 00:14:52.301 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.301 Zone Descriptor Change Notices: Not Supported 00:14:52.301 Discovery Log Change Notices: Not Supported 00:14:52.301 Controller Attributes 00:14:52.301 128-bit Host Identifier: Supported 00:14:52.301 Non-Operational Permissive Mode: Not Supported 00:14:52.301 NVM Sets: Not Supported 00:14:52.301 Read Recovery Levels: Not Supported 00:14:52.301 Endurance Groups: Not Supported 00:14:52.301 Predictable Latency Mode: Not Supported 00:14:52.301 Traffic Based Keep ALive: Not Supported 00:14:52.301 Namespace Granularity: Not Supported 00:14:52.301 SQ Associations: Not Supported 00:14:52.301 UUID List: Not Supported 00:14:52.301 Multi-Domain Subsystem: Not Supported 00:14:52.301 Fixed Capacity Management: Not Supported 00:14:52.301 Variable Capacity Management: Not Supported 00:14:52.301 Delete Endurance Group: Not Supported 00:14:52.301 Delete NVM Set: Not Supported 00:14:52.301 Extended LBA Formats Supported: Not Supported 00:14:52.301 Flexible Data Placement Supported: Not Supported 00:14:52.301 00:14:52.301 Controller Memory Buffer Support 00:14:52.301 ================================ 00:14:52.301 Supported: No 00:14:52.301 00:14:52.301 Persistent Memory Region Support 00:14:52.301 ================================ 00:14:52.301 Supported: No 00:14:52.301 00:14:52.301 Admin Command Set Attributes 00:14:52.301 ============================ 00:14:52.301 Security Send/Receive: Not Supported 00:14:52.301 Format NVM: Not Supported 00:14:52.301 Firmware Activate/Download: Not Supported 00:14:52.301 Namespace Management: Not Supported 00:14:52.301 Device Self-Test: Not Supported 00:14:52.301 Directives: Not Supported 00:14:52.301 NVMe-MI: Not Supported 00:14:52.301 Virtualization Management: Not Supported 00:14:52.301 Doorbell Buffer Config: Not Supported 00:14:52.301 Get LBA Status Capability: Not Supported 00:14:52.301 Command & Feature Lockdown Capability: Not Supported 00:14:52.301 Abort Command Limit: 4 00:14:52.301 Async Event Request Limit: 4 00:14:52.301 Number of Firmware Slots: N/A 00:14:52.301 Firmware Slot 1 Read-Only: N/A 00:14:52.301 Firmware Activation Without Reset: N/A 00:14:52.301 Multiple Update Detection Support: N/A 00:14:52.301 Firmware Update Granularity: No Information Provided 00:14:52.301 Per-Namespace SMART Log: No 00:14:52.301 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.301 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:52.301 Command Effects Log Page: Supported 00:14:52.301 Get Log Page Extended Data: Supported 00:14:52.301 Telemetry Log Pages: Not Supported 00:14:52.301 Persistent Event Log Pages: Not Supported 00:14:52.302 Supported Log Pages Log Page: May Support 00:14:52.302 Commands Supported & 
Effects Log Page: Not Supported 00:14:52.302 Feature Identifiers & Effects Log Page:May Support 00:14:52.302 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.302 Data Area 4 for Telemetry Log: Not Supported 00:14:52.302 Error Log Page Entries Supported: 128 00:14:52.302 Keep Alive: Supported 00:14:52.302 Keep Alive Granularity: 10000 ms 00:14:52.302 00:14:52.302 NVM Command Set Attributes 00:14:52.302 ========================== 00:14:52.302 Submission Queue Entry Size 00:14:52.302 Max: 64 00:14:52.302 Min: 64 00:14:52.302 Completion Queue Entry Size 00:14:52.302 Max: 16 00:14:52.302 Min: 16 00:14:52.302 Number of Namespaces: 32 00:14:52.302 Compare Command: Supported 00:14:52.302 Write Uncorrectable Command: Not Supported 00:14:52.302 Dataset Management Command: Supported 00:14:52.302 Write Zeroes Command: Supported 00:14:52.302 Set Features Save Field: Not Supported 00:14:52.302 Reservations: Not Supported 00:14:52.302 Timestamp: Not Supported 00:14:52.302 Copy: Supported 00:14:52.302 Volatile Write Cache: Present 00:14:52.302 Atomic Write Unit (Normal): 1 00:14:52.302 Atomic Write Unit (PFail): 1 00:14:52.302 Atomic Compare & Write Unit: 1 00:14:52.302 Fused Compare & Write: Supported 00:14:52.302 Scatter-Gather List 00:14:52.302 SGL Command Set: Supported (Dword aligned) 00:14:52.302 SGL Keyed: Not Supported 00:14:52.302 SGL Bit Bucket Descriptor: Not Supported 00:14:52.302 SGL Metadata Pointer: Not Supported 00:14:52.302 Oversized SGL: Not Supported 00:14:52.302 SGL Metadata Address: Not Supported 00:14:52.302 SGL Offset: Not Supported 00:14:52.302 Transport SGL Data Block: Not Supported 00:14:52.302 Replay Protected Memory Block: Not Supported 00:14:52.302 00:14:52.302 Firmware Slot Information 00:14:52.302 ========================= 00:14:52.302 Active slot: 1 00:14:52.302 Slot 1 Firmware Revision: 25.01 00:14:52.302 00:14:52.302 00:14:52.302 Commands Supported and Effects 00:14:52.302 ============================== 00:14:52.302 Admin Commands 00:14:52.302 -------------- 00:14:52.302 Get Log Page (02h): Supported 00:14:52.302 Identify (06h): Supported 00:14:52.302 Abort (08h): Supported 00:14:52.302 Set Features (09h): Supported 00:14:52.302 Get Features (0Ah): Supported 00:14:52.302 Asynchronous Event Request (0Ch): Supported 00:14:52.302 Keep Alive (18h): Supported 00:14:52.302 I/O Commands 00:14:52.302 ------------ 00:14:52.302 Flush (00h): Supported LBA-Change 00:14:52.302 Write (01h): Supported LBA-Change 00:14:52.302 Read (02h): Supported 00:14:52.302 Compare (05h): Supported 00:14:52.302 Write Zeroes (08h): Supported LBA-Change 00:14:52.302 Dataset Management (09h): Supported LBA-Change 00:14:52.302 Copy (19h): Supported LBA-Change 00:14:52.302 00:14:52.302 Error Log 00:14:52.302 ========= 00:14:52.302 00:14:52.302 Arbitration 00:14:52.302 =========== 00:14:52.302 Arbitration Burst: 1 00:14:52.302 00:14:52.302 Power Management 00:14:52.302 ================ 00:14:52.302 Number of Power States: 1 00:14:52.302 Current Power State: Power State #0 00:14:52.302 Power State #0: 00:14:52.302 Max Power: 0.00 W 00:14:52.302 Non-Operational State: Operational 00:14:52.302 Entry Latency: Not Reported 00:14:52.302 Exit Latency: Not Reported 00:14:52.302 Relative Read Throughput: 0 00:14:52.302 Relative Read Latency: 0 00:14:52.302 Relative Write Throughput: 0 00:14:52.302 Relative Write Latency: 0 00:14:52.302 Idle Power: Not Reported 00:14:52.302 Active Power: Not Reported 00:14:52.302 Non-Operational Permissive Mode: Not Supported 00:14:52.302 00:14:52.302 Health Information 
00:14:52.302 ================== 00:14:52.302 Critical Warnings: 00:14:52.302 Available Spare Space: OK 00:14:52.302 Temperature: OK 00:14:52.302 Device Reliability: OK 00:14:52.302 Read Only: No 00:14:52.302 Volatile Memory Backup: OK 00:14:52.302 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:52.302 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:52.302 Available Spare: 0% 00:14:52.302 Available Sp[2024-10-30 14:01:50.406848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:52.302 [2024-10-30 14:01:50.414750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:52.302 [2024-10-30 14:01:50.414775] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:52.302 [2024-10-30 14:01:50.414782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.302 [2024-10-30 14:01:50.414787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.302 [2024-10-30 14:01:50.414791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.302 [2024-10-30 14:01:50.414796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.302 [2024-10-30 14:01:50.414827] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:52.302 [2024-10-30 14:01:50.414836] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:52.302 [2024-10-30 14:01:50.415840] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:52.302 [2024-10-30 14:01:50.415888] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:52.302 [2024-10-30 14:01:50.415896] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:52.302 [2024-10-30 14:01:50.416842] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:52.302 [2024-10-30 14:01:50.416852] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:52.302 [2024-10-30 14:01:50.416910] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:52.302 [2024-10-30 14:01:50.417868] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:52.302 are Threshold: 0% 00:14:52.302 Life Percentage Used: 0% 00:14:52.302 Data Units Read: 0 00:14:52.302 Data Units Written: 0 00:14:52.302 Host Read Commands: 0 00:14:52.302 Host Write Commands: 0 00:14:52.302 Controller Busy Time: 0 minutes 00:14:52.302 Power Cycles: 0 00:14:52.302 Power On Hours: 0 hours 00:14:52.302 Unsafe Shutdowns: 0 00:14:52.302 Unrecoverable Media Errors: 0 00:14:52.302 Lifetime Error Log Entries: 0 00:14:52.302 Warning Temperature 
Time: 0 minutes 00:14:52.302 Critical Temperature Time: 0 minutes 00:14:52.302 00:14:52.302 Number of Queues 00:14:52.302 ================ 00:14:52.302 Number of I/O Submission Queues: 127 00:14:52.302 Number of I/O Completion Queues: 127 00:14:52.302 00:14:52.302 Active Namespaces 00:14:52.302 ================= 00:14:52.302 Namespace ID:1 00:14:52.302 Error Recovery Timeout: Unlimited 00:14:52.302 Command Set Identifier: NVM (00h) 00:14:52.302 Deallocate: Supported 00:14:52.302 Deallocated/Unwritten Error: Not Supported 00:14:52.302 Deallocated Read Value: Unknown 00:14:52.302 Deallocate in Write Zeroes: Not Supported 00:14:52.302 Deallocated Guard Field: 0xFFFF 00:14:52.302 Flush: Supported 00:14:52.302 Reservation: Supported 00:14:52.302 Namespace Sharing Capabilities: Multiple Controllers 00:14:52.302 Size (in LBAs): 131072 (0GiB) 00:14:52.302 Capacity (in LBAs): 131072 (0GiB) 00:14:52.302 Utilization (in LBAs): 131072 (0GiB) 00:14:52.302 NGUID: 95CB6DEE0D5244B98C89A29472A7D809 00:14:52.302 UUID: 95cb6dee-0d52-44b9-8c89-a29472a7d809 00:14:52.302 Thin Provisioning: Not Supported 00:14:52.302 Per-NS Atomic Units: Yes 00:14:52.302 Atomic Boundary Size (Normal): 0 00:14:52.302 Atomic Boundary Size (PFail): 0 00:14:52.302 Atomic Boundary Offset: 0 00:14:52.302 Maximum Single Source Range Length: 65535 00:14:52.302 Maximum Copy Length: 65535 00:14:52.302 Maximum Source Range Count: 1 00:14:52.302 NGUID/EUI64 Never Reused: No 00:14:52.302 Namespace Write Protected: No 00:14:52.302 Number of LBA Formats: 1 00:14:52.302 Current LBA Format: LBA Format #00 00:14:52.302 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:52.302 00:14:52.302 14:01:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:52.563 [2024-10-30 14:01:50.606130] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:57.850 Initializing NVMe Controllers 00:14:57.850 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:57.850 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:57.850 Initialization complete. Launching workers. 
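The Active Namespaces block above lists Size, Capacity and Utilization as 131072 LBAs, displayed as (0GiB); with the 512-byte data size of LBA Format #00 that is a 64 MiB namespace, which rounds down to zero when expressed in whole GiB:

  131072 LBAs x 512 B = 67,108,864 B = 64 MiB ~= 0.06 GiB -> printed as 0GiB

The same rounding is why the hello_world runs in this log report "Namespace ID: 1 size: 0GB".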
00:14:57.850 ======================================================== 00:14:57.850 Latency(us) 00:14:57.850 Device Information : IOPS MiB/s Average min max 00:14:57.850 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40012.79 156.30 3199.53 845.47 9768.29 00:14:57.850 ======================================================== 00:14:57.850 Total : 40012.79 156.30 3199.53 845.47 9768.29 00:14:57.850 00:14:57.850 [2024-10-30 14:01:55.711927] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:57.850 14:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:57.850 [2024-10-30 14:01:55.902578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.140 Initializing NVMe Controllers 00:15:03.140 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:03.140 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:03.140 Initialization complete. Launching workers. 00:15:03.140 ======================================================== 00:15:03.140 Latency(us) 00:15:03.140 Device Information : IOPS MiB/s Average min max 00:15:03.140 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39970.83 156.14 3202.19 850.63 8764.85 00:15:03.140 ======================================================== 00:15:03.140 Total : 39970.83 156.14 3202.19 850.63 8764.85 00:15:03.140 00:15:03.140 [2024-10-30 14:02:00.921345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.140 14:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:03.140 [2024-10-30 14:02:01.132498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.585 [2024-10-30 14:02:06.273835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.585 Initializing NVMe Controllers 00:15:08.585 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:08.585 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:08.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:08.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:08.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:08.585 Initialization complete. Launching workers. 
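The two spdk_nvme_perf summaries above hang together arithmetically: at the 4096-byte I/O size (-o 4096), MiB/s is IOPS x 4096 / 2^20, and with queue depth 128 (-q 128) on a single core the average latency is approximately the queue depth divided by the IOPS (Little's law):

  read : 40012.79 IO/s x 4 KiB ~= 156.30 MiB/s ; 128 / 40012.79 IO/s ~= 3199 us average
  write: 39970.83 IO/s x 4 KiB ~= 156.14 MiB/s ; 128 / 39970.83 IO/s ~= 3202 us average

both of which line up with the printed Average column to within a few microseconds.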
00:15:08.585 Starting thread on core 2 00:15:08.585 Starting thread on core 3 00:15:08.585 Starting thread on core 1 00:15:08.585 14:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:08.585 [2024-10-30 14:02:06.512128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.889 [2024-10-30 14:02:09.567175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.889 Initializing NVMe Controllers 00:15:11.889 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.889 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.889 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:11.889 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:11.889 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:11.889 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:11.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:11.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:11.889 Initialization complete. Launching workers. 00:15:11.889 Starting thread on core 1 with urgent priority queue 00:15:11.889 Starting thread on core 2 with urgent priority queue 00:15:11.889 Starting thread on core 3 with urgent priority queue 00:15:11.889 Starting thread on core 0 with urgent priority queue 00:15:11.889 SPDK bdev Controller (SPDK2 ) core 0: 9072.67 IO/s 11.02 secs/100000 ios 00:15:11.889 SPDK bdev Controller (SPDK2 ) core 1: 10600.33 IO/s 9.43 secs/100000 ios 00:15:11.889 SPDK bdev Controller (SPDK2 ) core 2: 15705.00 IO/s 6.37 secs/100000 ios 00:15:11.889 SPDK bdev Controller (SPDK2 ) core 3: 15885.67 IO/s 6.29 secs/100000 ios 00:15:11.889 ======================================================== 00:15:11.889 00:15:11.889 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:11.889 [2024-10-30 14:02:09.797973] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.889 Initializing NVMe Controllers 00:15:11.889 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.889 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.889 Namespace ID: 1 size: 0GB 00:15:11.889 Initialization complete. 00:15:11.889 INFO: using host memory buffer for IO 00:15:11.889 Hello world! 
00:15:11.889 [2024-10-30 14:02:09.808037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.889 14:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:11.889 [2024-10-30 14:02:10.041780] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.276 Initializing NVMe Controllers 00:15:13.276 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:13.276 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:13.276 Initialization complete. Launching workers. 00:15:13.276 submit (in ns) avg, min, max = 6336.1, 2852.5, 4000283.3 00:15:13.276 complete (in ns) avg, min, max = 14859.1, 1629.2, 3999296.7 00:15:13.276 00:15:13.276 Submit histogram 00:15:13.276 ================ 00:15:13.276 Range in us Cumulative Count 00:15:13.276 2.840 - 2.853: 0.0049% ( 1) 00:15:13.276 2.853 - 2.867: 0.4318% ( 87) 00:15:13.276 2.867 - 2.880: 1.7761% ( 274) 00:15:13.276 2.880 - 2.893: 5.3282% ( 724) 00:15:13.276 2.893 - 2.907: 10.6859% ( 1092) 00:15:13.277 2.907 - 2.920: 16.4802% ( 1181) 00:15:13.277 2.920 - 2.933: 20.6309% ( 846) 00:15:13.277 2.933 - 2.947: 25.9837% ( 1091) 00:15:13.277 2.947 - 2.960: 31.0078% ( 1024) 00:15:13.277 2.960 - 2.973: 36.9395% ( 1209) 00:15:13.277 2.973 - 2.987: 42.4394% ( 1121) 00:15:13.277 2.987 - 3.000: 48.1798% ( 1170) 00:15:13.277 3.000 - 3.013: 55.3724% ( 1466) 00:15:13.277 3.013 - 3.027: 64.6257% ( 1886) 00:15:13.277 3.027 - 3.040: 73.0645% ( 1720) 00:15:13.277 3.040 - 3.053: 80.5760% ( 1531) 00:15:13.277 3.053 - 3.067: 86.9885% ( 1307) 00:15:13.277 3.067 - 3.080: 92.4198% ( 1107) 00:15:13.277 3.080 - 3.093: 95.8002% ( 689) 00:15:13.277 3.093 - 3.107: 97.7235% ( 392) 00:15:13.277 3.107 - 3.120: 98.6557% ( 190) 00:15:13.277 3.120 - 3.133: 99.1218% ( 95) 00:15:13.277 3.133 - 3.147: 99.2542% ( 27) 00:15:13.277 3.147 - 3.160: 99.3622% ( 22) 00:15:13.277 3.160 - 3.173: 99.3916% ( 6) 00:15:13.277 3.173 - 3.187: 99.3965% ( 1) 00:15:13.277 3.280 - 3.293: 99.4014% ( 1) 00:15:13.277 3.293 - 3.307: 99.4112% ( 2) 00:15:13.277 3.320 - 3.333: 99.4260% ( 3) 00:15:13.277 3.413 - 3.440: 99.4407% ( 3) 00:15:13.277 3.467 - 3.493: 99.4505% ( 2) 00:15:13.277 3.493 - 3.520: 99.4554% ( 1) 00:15:13.277 3.520 - 3.547: 99.4750% ( 4) 00:15:13.277 3.547 - 3.573: 99.4897% ( 3) 00:15:13.277 3.573 - 3.600: 99.5094% ( 4) 00:15:13.277 3.600 - 3.627: 99.5143% ( 1) 00:15:13.277 3.627 - 3.653: 99.5290% ( 3) 00:15:13.277 3.680 - 3.707: 99.5388% ( 2) 00:15:13.277 3.707 - 3.733: 99.5486% ( 2) 00:15:13.277 3.733 - 3.760: 99.5584% ( 2) 00:15:13.277 3.760 - 3.787: 99.5781% ( 4) 00:15:13.277 3.787 - 3.813: 99.5977% ( 4) 00:15:13.277 3.813 - 3.840: 99.6075% ( 2) 00:15:13.277 3.840 - 3.867: 99.6124% ( 1) 00:15:13.277 3.867 - 3.893: 99.6222% ( 2) 00:15:13.277 3.893 - 3.920: 99.6320% ( 2) 00:15:13.277 3.920 - 3.947: 99.6418% ( 2) 00:15:13.277 4.107 - 4.133: 99.6467% ( 1) 00:15:13.277 4.133 - 4.160: 99.6517% ( 1) 00:15:13.277 4.187 - 4.213: 99.6566% ( 1) 00:15:13.277 4.400 - 4.427: 99.6615% ( 1) 00:15:13.277 4.453 - 4.480: 99.6664% ( 1) 00:15:13.277 4.560 - 4.587: 99.6713% ( 1) 00:15:13.277 5.013 - 5.040: 99.6811% ( 2) 00:15:13.277 5.040 - 5.067: 99.6860% ( 1) 00:15:13.277 5.067 - 5.093: 99.6958% ( 2) 00:15:13.277 5.093 - 5.120: 99.7007% ( 1) 00:15:13.277 5.120 - 5.147: 
99.7154% ( 3) 00:15:13.277 5.147 - 5.173: 99.7203% ( 1) 00:15:13.277 5.493 - 5.520: 99.7252% ( 1) 00:15:13.277 5.627 - 5.653: 99.7302% ( 1) 00:15:13.277 5.653 - 5.680: 99.7351% ( 1) 00:15:13.277 5.680 - 5.707: 99.7400% ( 1) 00:15:13.277 5.707 - 5.733: 99.7449% ( 1) 00:15:13.277 5.733 - 5.760: 99.7547% ( 2) 00:15:13.277 5.760 - 5.787: 99.7596% ( 1) 00:15:13.277 5.787 - 5.813: 99.7645% ( 1) 00:15:13.277 5.813 - 5.840: 99.7694% ( 1) 00:15:13.277 5.867 - 5.893: 99.7792% ( 2) 00:15:13.277 5.893 - 5.920: 99.7841% ( 1) 00:15:13.277 5.920 - 5.947: 99.7890% ( 1) 00:15:13.277 6.000 - 6.027: 99.7939% ( 1) 00:15:13.277 6.080 - 6.107: 99.7988% ( 1) 00:15:13.277 6.107 - 6.133: 99.8087% ( 2) 00:15:13.277 6.160 - 6.187: 99.8234% ( 3) 00:15:13.277 6.187 - 6.213: 99.8332% ( 2) 00:15:13.277 6.240 - 6.267: 99.8430% ( 2) 00:15:13.277 6.267 - 6.293: 99.8479% ( 1) 00:15:13.277 6.453 - 6.480: 99.8528% ( 1) 00:15:13.277 6.480 - 6.507: 99.8577% ( 1) 00:15:13.277 6.720 - 6.747: 99.8626% ( 1) 00:15:13.277 6.933 - 6.987: 99.8773% ( 3) 00:15:13.277 6.987 - 7.040: 99.8872% ( 2) 00:15:13.277 7.253 - 7.307: 99.8921% ( 1) 00:15:13.277 7.307 - 7.360: 99.8970% ( 1) 00:15:13.277 7.360 - 7.413: 99.9019% ( 1) 00:15:13.277 7.413 - 7.467: 99.9068% ( 1) 00:15:13.277 7.947 - 8.000: 99.9117% ( 1) 00:15:13.277 11.093 - 11.147: 99.9166% ( 1) 00:15:13.277 3986.773 - 4014.080: 100.0000% ( 17) 00:15:13.277 00:15:13.277 Complete histogram 00:15:13.277 ================== 00:15:13.277 Range in us Cumulative Count 00:15:13.277 1.627 - 1.633: 0.0049% ( 1) 00:15:13.277 1.640 - 1.647: 0.5397% ( 109) 00:15:13.277 1.647 - 1.653: 1.2315% ( 141) 00:15:13.277 1.653 - 1.660: 1.3296% ( 20) 00:15:13.277 1.660 - 1.667: 1.6927% ( 74) 00:15:13.277 1.667 - 1.673: 1.8202% ( 26) 00:15:13.277 1.673 - 1.680: 1.8399% ( 4) 00:15:13.277 1.680 - 1.687: 1.8546% ( 3) 00:15:13.277 1.687 - 1.693: 17.3535% ( 3159) 00:15:13.277 1.693 - 1.700: 37.8815% ( 4184) 00:15:13.277 1.700 - 1.707: 40.4131% ( 516) 00:15:13.277 1.707 - 1.720: 72.9516% ( 6632) 00:15:13.277 1.720 - 1.733: 82.0037% ( 1845) 00:15:13.277 1.733 - 1.747: 83.4118% ( 287) 00:15:13.277 1.747 - 1.760: 85.9680% ( 521) 00:15:13.277 1.760 - 1.773: 90.6830% ( 961) 00:15:13.277 1.773 - 1.787: 95.1967% ( 920) 00:15:13.277 1.787 - 1.800: 98.1062% ( 593) 00:15:13.277 1.800 - 1.813: 98.9648% ( 175) 00:15:13.277 1.813 - 1.827: 99.1708% ( 42) 00:15:13.277 1.827 - 1.840: 99.1954% ( 5) 00:15:13.277 1.840 - 1.853: 99.2101% ( 3) 00:15:13.277 1.853 - 1.867: 99.2248% ( 3) 00:15:13.277 1.867 - 1.880: 99.2297% ( 1) 00:15:13.277 1.947 - 1.960: 99.2346% ( 1) 00:15:13.277 1.973 - 1.987: 99.2395% ( 1) 00:15:13.277 1.987 - 2.000: 99.2444% ( 1) 00:15:13.277 2.000 - 2.013: 99.2493% ( 1) 00:15:13.277 2.013 - 2.027: 99.2542% ( 1) 00:15:13.277 2.027 - 2.040: 99.2592% ( 1) 00:15:13.277 2.040 - 2.053: 99.2641% ( 1) 00:15:13.277 2.053 - 2.067: 99.2690% ( 1) 00:15:13.277 2.067 - 2.080: 99.2788% ( 2) 00:15:13.277 2.080 - 2.093: 99.2837% ( 1) 00:15:13.277 2.093 - 2.107: 99.2935% ( 2) 00:15:13.277 2.107 - 2.120: 99.2984% ( 1) 00:15:13.277 2.120 - 2.133: 99.3180% ( 4) 00:15:13.277 2.133 - 2.147: 99.3278% ( 2) 00:15:13.277 2.147 - 2.160: 99.3475% ( 4) 00:15:13.277 2.160 - 2.173: 99.3573% ( 2) 00:15:13.277 2.173 - 2.187: 99.3769% ( 4) 00:15:13.277 2.187 - 2.200: 99.3867% ( 2) 00:15:13.277 2.200 - 2.213: 99.4014% ( 3) 00:15:13.277 2.213 - 2.227: 99.4063% ( 1) 00:15:13.277 2.227 - 2.240: 99.4112% ( 1) 00:15:13.277 2.240 - 2.253: 99.4162% ( 1) 00:15:13.277 2.253 - 2.267: 99.4211% ( 1) 00:15:13.277 2.267 - 2.280: 99.4260% ( 1) 00:15:13.277 2.307 - 
2.320: 99.4309% ( 1) 00:15:13.277 2.560 - 2.573: 99.4358% ( 1) 00:15:13.277 3.320 - 3.333: 99.4407% ( 1) 00:15:13.277 3.707 - 3.733: 99.4456% ( 1) 00:15:13.277 3.760 - 3.787: 99.4505% ( 1) 00:15:13.277 3.893 - 3.920: 99.4554% ( 1) 00:15:13.277 3.973 - 4.000: 99.4603% ( 1) 00:15:13.277 4.000 - 4.027: 99.4652% ( 1) 00:15:13.277 4.213 - 4.240: 99.4701% ( 1) 00:15:13.277 4.320 - 4.347: 99.4750% ( 1) 00:15:13.277 4.453 - 4.480: 99.4799% ( 1) 00:15:13.278 4.507 - 4.533: 99.4848% ( 1) 00:15:13.278 4.587 - 4.613: 99.4897% ( 1) 00:15:13.278 4.667 - 4.693: 99.4947% ( 1) 00:15:13.278 4.720 - 4.747: 99.5045% ( 2) 00:15:13.278 4.747 - 4.773: 99.5094% ( 1) 00:15:13.278 4.773 - 4.800: 99.5143% ( 1) 00:15:13.278 4.800 - 4.827: 99.5192% ( 1) 00:15:13.278 4.827 - 4.853: 99.5241% ( 1) 00:15:13.278 4.987 - 5.013: 99.5339% ( 2) 00:15:13.278 5.013 - 5.040: 99.5388% ( 1) 00:15:13.278 5.120 - 5.147: 99.5437% ( 1) 00:15:13.278 5.147 - 5.173: 99.5486% ( 1) 00:15:13.278 5.200 - 5.227: 99.5535% ( 1) 00:15:13.278 5.227 - 5.253: 99.5633% ( 2) 00:15:13.278 5.360 - 5.387: 99.5682% ( 1) 00:15:13.278 5.440 - 5.4[2024-10-30 14:02:11.141366] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.278 67: 99.5732% ( 1) 00:15:13.278 5.520 - 5.547: 99.5781% ( 1) 00:15:13.278 5.573 - 5.600: 99.5879% ( 2) 00:15:13.278 5.653 - 5.680: 99.5928% ( 1) 00:15:13.278 5.733 - 5.760: 99.5977% ( 1) 00:15:13.278 5.813 - 5.840: 99.6075% ( 2) 00:15:13.278 5.947 - 5.973: 99.6124% ( 1) 00:15:13.278 6.107 - 6.133: 99.6173% ( 1) 00:15:13.278 6.187 - 6.213: 99.6222% ( 1) 00:15:13.278 6.213 - 6.240: 99.6271% ( 1) 00:15:13.278 6.773 - 6.800: 99.6320% ( 1) 00:15:13.278 7.200 - 7.253: 99.6418% ( 2) 00:15:13.278 7.947 - 8.000: 99.6467% ( 1) 00:15:13.278 10.880 - 10.933: 99.6517% ( 1) 00:15:13.278 11.307 - 11.360: 99.6566% ( 1) 00:15:13.278 11.733 - 11.787: 99.6615% ( 1) 00:15:13.278 13.867 - 13.973: 99.6664% ( 1) 00:15:13.278 34.773 - 34.987: 99.6713% ( 1) 00:15:13.278 3986.773 - 4014.080: 100.0000% ( 67) 00:15:13.278 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:13.278 [ 00:15:13.278 { 00:15:13.278 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:13.278 "subtype": "Discovery", 00:15:13.278 "listen_addresses": [], 00:15:13.278 "allow_any_host": true, 00:15:13.278 "hosts": [] 00:15:13.278 }, 00:15:13.278 { 00:15:13.278 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:13.278 "subtype": "NVMe", 00:15:13.278 "listen_addresses": [ 00:15:13.278 { 00:15:13.278 "trtype": "VFIOUSER", 00:15:13.278 "adrfam": "IPv4", 00:15:13.278 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:13.278 "trsvcid": "0" 00:15:13.278 } 00:15:13.278 ], 00:15:13.278 "allow_any_host": true, 00:15:13.278 "hosts": [], 00:15:13.278 "serial_number": "SPDK1", 00:15:13.278 "model_number": "SPDK bdev 
Controller", 00:15:13.278 "max_namespaces": 32, 00:15:13.278 "min_cntlid": 1, 00:15:13.278 "max_cntlid": 65519, 00:15:13.278 "namespaces": [ 00:15:13.278 { 00:15:13.278 "nsid": 1, 00:15:13.278 "bdev_name": "Malloc1", 00:15:13.278 "name": "Malloc1", 00:15:13.278 "nguid": "42055AD96F60415E9E8FC7CBCA2DAE7D", 00:15:13.278 "uuid": "42055ad9-6f60-415e-9e8f-c7cbca2dae7d" 00:15:13.278 }, 00:15:13.278 { 00:15:13.278 "nsid": 2, 00:15:13.278 "bdev_name": "Malloc3", 00:15:13.278 "name": "Malloc3", 00:15:13.278 "nguid": "779CACAAA5EB46F0A131792EDCC56D25", 00:15:13.278 "uuid": "779cacaa-a5eb-46f0-a131-792edcc56d25" 00:15:13.278 } 00:15:13.278 ] 00:15:13.278 }, 00:15:13.278 { 00:15:13.278 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:13.278 "subtype": "NVMe", 00:15:13.278 "listen_addresses": [ 00:15:13.278 { 00:15:13.278 "trtype": "VFIOUSER", 00:15:13.278 "adrfam": "IPv4", 00:15:13.278 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:13.278 "trsvcid": "0" 00:15:13.278 } 00:15:13.278 ], 00:15:13.278 "allow_any_host": true, 00:15:13.278 "hosts": [], 00:15:13.278 "serial_number": "SPDK2", 00:15:13.278 "model_number": "SPDK bdev Controller", 00:15:13.278 "max_namespaces": 32, 00:15:13.278 "min_cntlid": 1, 00:15:13.278 "max_cntlid": 65519, 00:15:13.278 "namespaces": [ 00:15:13.278 { 00:15:13.278 "nsid": 1, 00:15:13.278 "bdev_name": "Malloc2", 00:15:13.278 "name": "Malloc2", 00:15:13.278 "nguid": "95CB6DEE0D5244B98C89A29472A7D809", 00:15:13.278 "uuid": "95cb6dee-0d52-44b9-8c89-a29472a7d809" 00:15:13.278 } 00:15:13.278 ] 00:15:13.278 } 00:15:13.278 ] 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=984393 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:13.278 [2024-10-30 14:02:11.532052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.278 Malloc4 00:15:13.278 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:13.540 [2024-10-30 14:02:11.718360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.540 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:13.540 Asynchronous Event Request test 00:15:13.540 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:13.540 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:13.540 Registering asynchronous event callbacks... 00:15:13.540 Starting namespace attribute notice tests for all controllers... 00:15:13.540 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:13.540 aer_cb - Changed Namespace 00:15:13.540 Cleaning up... 00:15:13.801 [ 00:15:13.801 { 00:15:13.801 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:13.801 "subtype": "Discovery", 00:15:13.801 "listen_addresses": [], 00:15:13.801 "allow_any_host": true, 00:15:13.801 "hosts": [] 00:15:13.801 }, 00:15:13.801 { 00:15:13.801 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:13.801 "subtype": "NVMe", 00:15:13.801 "listen_addresses": [ 00:15:13.801 { 00:15:13.801 "trtype": "VFIOUSER", 00:15:13.801 "adrfam": "IPv4", 00:15:13.801 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:13.801 "trsvcid": "0" 00:15:13.801 } 00:15:13.801 ], 00:15:13.801 "allow_any_host": true, 00:15:13.801 "hosts": [], 00:15:13.801 "serial_number": "SPDK1", 00:15:13.801 "model_number": "SPDK bdev Controller", 00:15:13.801 "max_namespaces": 32, 00:15:13.801 "min_cntlid": 1, 00:15:13.801 "max_cntlid": 65519, 00:15:13.801 "namespaces": [ 00:15:13.801 { 00:15:13.801 "nsid": 1, 00:15:13.801 "bdev_name": "Malloc1", 00:15:13.801 "name": "Malloc1", 00:15:13.801 "nguid": "42055AD96F60415E9E8FC7CBCA2DAE7D", 00:15:13.801 "uuid": "42055ad9-6f60-415e-9e8f-c7cbca2dae7d" 00:15:13.801 }, 00:15:13.801 { 00:15:13.801 "nsid": 2, 00:15:13.801 "bdev_name": "Malloc3", 00:15:13.801 "name": "Malloc3", 00:15:13.801 "nguid": "779CACAAA5EB46F0A131792EDCC56D25", 00:15:13.801 "uuid": "779cacaa-a5eb-46f0-a131-792edcc56d25" 00:15:13.801 } 00:15:13.801 ] 00:15:13.801 }, 00:15:13.801 { 00:15:13.801 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:13.801 "subtype": "NVMe", 00:15:13.801 "listen_addresses": [ 00:15:13.801 { 00:15:13.801 "trtype": "VFIOUSER", 00:15:13.801 "adrfam": "IPv4", 00:15:13.801 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:13.801 "trsvcid": "0" 00:15:13.801 } 00:15:13.801 ], 00:15:13.801 "allow_any_host": true, 00:15:13.801 "hosts": [], 00:15:13.801 "serial_number": "SPDK2", 00:15:13.801 "model_number": "SPDK bdev 
Controller", 00:15:13.801 "max_namespaces": 32, 00:15:13.801 "min_cntlid": 1, 00:15:13.802 "max_cntlid": 65519, 00:15:13.802 "namespaces": [ 00:15:13.802 { 00:15:13.802 "nsid": 1, 00:15:13.802 "bdev_name": "Malloc2", 00:15:13.802 "name": "Malloc2", 00:15:13.802 "nguid": "95CB6DEE0D5244B98C89A29472A7D809", 00:15:13.802 "uuid": "95cb6dee-0d52-44b9-8c89-a29472a7d809" 00:15:13.802 }, 00:15:13.802 { 00:15:13.802 "nsid": 2, 00:15:13.802 "bdev_name": "Malloc4", 00:15:13.802 "name": "Malloc4", 00:15:13.802 "nguid": "7EFD320C5BB04BEB8A5944A449515637", 00:15:13.802 "uuid": "7efd320c-5bb0-4beb-8a59-44a449515637" 00:15:13.802 } 00:15:13.802 ] 00:15:13.802 } 00:15:13.802 ] 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 984393 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 975438 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 975438 ']' 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 975438 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 975438 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.802 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 975438' 00:15:13.802 killing process with pid 975438 00:15:13.802 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 975438 00:15:13.802 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 975438 00:15:14.063 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:14.063 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:14.063 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:14.063 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:14.063 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:14.063 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=984731 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 984731' 00:15:14.064 Process pid: 984731 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 984731 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 984731 ']' 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:14.064 [2024-10-30 14:02:12.171200] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:14.064 [2024-10-30 14:02:12.171910] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:15:14.064 [2024-10-30 14:02:12.171949] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.064 [2024-10-30 14:02:12.222301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.064 [2024-10-30 14:02:12.251239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.064 [2024-10-30 14:02:12.251264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.064 [2024-10-30 14:02:12.251269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.064 [2024-10-30 14:02:12.251274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.064 [2024-10-30 14:02:12.251278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.064 [2024-10-30 14:02:12.252464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.064 [2024-10-30 14:02:12.252617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.064 [2024-10-30 14:02:12.252779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.064 [2024-10-30 14:02:12.252780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.064 [2024-10-30 14:02:12.302999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:14.064 [2024-10-30 14:02:12.304190] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:14.064 [2024-10-30 14:02:12.304860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:14.064 [2024-10-30 14:02:12.305460] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:15:14.064 [2024-10-30 14:02:12.305473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:14.064 14:02:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:15.449 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:15.449 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:15.449 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:15.449 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.449 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:15.449 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:15.449 Malloc1 00:15:15.711 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:15.711 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:15.972 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:16.233 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:16.233 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:16.233 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:16.233 Malloc2 00:15:16.493 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:16.493 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:16.754 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:17.016 14:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 984731 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 984731 ']' 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 984731 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 984731 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 984731' 00:15:17.016 killing process with pid 984731 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 984731 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 984731 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:17.016 00:15:17.016 real 0m50.339s 00:15:17.016 user 3m15.092s 00:15:17.016 sys 0m2.683s 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:17.016 ************************************ 00:15:17.016 END TEST nvmf_vfio_user 00:15:17.016 ************************************ 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.016 14:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:17.278 ************************************ 00:15:17.278 START TEST nvmf_vfio_user_nvme_compliance 00:15:17.278 ************************************ 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:17.279 * Looking for test storage... 
00:15:17.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:17.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.279 --rc genhtml_branch_coverage=1 00:15:17.279 --rc genhtml_function_coverage=1 00:15:17.279 --rc genhtml_legend=1 00:15:17.279 --rc geninfo_all_blocks=1 00:15:17.279 --rc geninfo_unexecuted_blocks=1 00:15:17.279 00:15:17.279 ' 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:17.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.279 --rc genhtml_branch_coverage=1 00:15:17.279 --rc genhtml_function_coverage=1 00:15:17.279 --rc genhtml_legend=1 00:15:17.279 --rc geninfo_all_blocks=1 00:15:17.279 --rc geninfo_unexecuted_blocks=1 00:15:17.279 00:15:17.279 ' 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:17.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.279 --rc genhtml_branch_coverage=1 00:15:17.279 --rc genhtml_function_coverage=1 00:15:17.279 --rc genhtml_legend=1 00:15:17.279 --rc geninfo_all_blocks=1 00:15:17.279 --rc geninfo_unexecuted_blocks=1 00:15:17.279 00:15:17.279 ' 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:17.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.279 --rc genhtml_branch_coverage=1 00:15:17.279 --rc genhtml_function_coverage=1 00:15:17.279 --rc genhtml_legend=1 00:15:17.279 --rc geninfo_all_blocks=1 00:15:17.279 --rc 
geninfo_unexecuted_blocks=1 00:15:17.279 00:15:17.279 ' 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.279 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.541 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.541 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.541 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.541 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.541 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.541 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=985470 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 985470' 00:15:17.542 Process pid: 985470 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 985470 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 985470 ']' 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.542 14:02:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:17.542 [2024-10-30 14:02:15.651151] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:15:17.542 [2024-10-30 14:02:15.651227] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.542 [2024-10-30 14:02:15.737769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:17.542 [2024-10-30 14:02:15.771906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.542 [2024-10-30 14:02:15.771938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.542 [2024-10-30 14:02:15.771944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.542 [2024-10-30 14:02:15.771949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.542 [2024-10-30 14:02:15.771953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.542 [2024-10-30 14:02:15.773149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.542 [2024-10-30 14:02:15.773307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.542 [2024-10-30 14:02:15.773309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.483 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.483 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:18.483 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:19.425 malloc0 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:19.425 14:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.425 14:02:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:19.425 00:15:19.425 00:15:19.425 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.425 http://cunit.sourceforge.net/ 00:15:19.425 00:15:19.425 00:15:19.425 Suite: nvme_compliance 00:15:19.425 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-30 14:02:17.698133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.425 [2024-10-30 14:02:17.699421] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:19.425 [2024-10-30 14:02:17.699432] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:19.425 [2024-10-30 14:02:17.699437] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:19.425 [2024-10-30 14:02:17.701157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.686 passed 00:15:19.686 Test: admin_identify_ctrlr_verify_fused ...[2024-10-30 14:02:17.777626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.686 [2024-10-30 14:02:17.782658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.686 passed 00:15:19.686 Test: admin_identify_ns ...[2024-10-30 14:02:17.860114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.686 [2024-10-30 14:02:17.920754] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:19.686 [2024-10-30 14:02:17.928759] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:19.686 [2024-10-30 14:02:17.949833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:19.686 passed 00:15:19.947 Test: admin_get_features_mandatory_features ...[2024-10-30 14:02:18.024053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.947 [2024-10-30 14:02:18.027072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.947 passed 00:15:19.947 Test: admin_get_features_optional_features ...[2024-10-30 14:02:18.101519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.947 [2024-10-30 14:02:18.104536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.947 passed 00:15:19.947 Test: admin_set_features_number_of_queues ...[2024-10-30 14:02:18.180164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.207 [2024-10-30 14:02:18.284835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.207 passed 00:15:20.207 Test: admin_get_log_page_mandatory_logs ...[2024-10-30 14:02:18.360885] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.207 [2024-10-30 14:02:18.363900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.207 passed 00:15:20.207 Test: admin_get_log_page_with_lpo ...[2024-10-30 14:02:18.437631] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.207 [2024-10-30 14:02:18.505758] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:20.467 [2024-10-30 14:02:18.518804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.467 passed 00:15:20.467 Test: fabric_property_get ...[2024-10-30 14:02:18.592045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.467 [2024-10-30 14:02:18.593242] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:20.467 [2024-10-30 14:02:18.595061] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.467 passed 00:15:20.467 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-30 14:02:18.671539] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.467 [2024-10-30 14:02:18.672735] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:20.467 [2024-10-30 14:02:18.674564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.467 passed 00:15:20.467 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-30 14:02:18.751442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.728 [2024-10-30 14:02:18.835754] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.728 [2024-10-30 14:02:18.851760] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.728 [2024-10-30 14:02:18.856829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.728 passed 00:15:20.728 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-30 14:02:18.931075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.728 [2024-10-30 14:02:18.932271] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:20.728 [2024-10-30 14:02:18.934106] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.728 passed 00:15:20.728 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-30 14:02:19.008855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.988 [2024-10-30 14:02:19.086757] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:20.988 [2024-10-30 14:02:19.110754] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.988 [2024-10-30 14:02:19.115819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.988 passed 00:15:20.988 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-30 14:02:19.189983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.988 [2024-10-30 14:02:19.191178] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:20.988 [2024-10-30 14:02:19.191195] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:20.988 [2024-10-30 14:02:19.192998] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.988 passed 00:15:20.988 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-30 14:02:19.267702] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.248 [2024-10-30 14:02:19.359751] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:21.248 [2024-10-30 14:02:19.367750] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:21.248 [2024-10-30 14:02:19.375749] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:21.248 [2024-10-30 14:02:19.383753] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:21.248 [2024-10-30 14:02:19.412815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.248 passed 00:15:21.248 Test: admin_create_io_sq_verify_pc ...[2024-10-30 14:02:19.485978] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.248 [2024-10-30 14:02:19.502760] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:21.248 [2024-10-30 14:02:19.520120] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.248 passed 00:15:21.507 Test: admin_create_io_qp_max_qps ...[2024-10-30 14:02:19.594548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.450 [2024-10-30 14:02:20.714756] nvme_ctrlr.c:5487:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:23.022 [2024-10-30 14:02:21.118983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.022 passed 00:15:23.022 Test: admin_create_io_sq_shared_cq ...[2024-10-30 14:02:21.193101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.284 [2024-10-30 14:02:21.328754] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:23.284 [2024-10-30 14:02:21.363819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.284 passed 00:15:23.284 00:15:23.284 Run Summary: Type Total Ran Passed Failed Inactive 00:15:23.284 suites 1 1 n/a 0 0 00:15:23.284 tests 18 18 18 0 0 00:15:23.284 asserts 
360 360 360 0 n/a 00:15:23.284 00:15:23.284 Elapsed time = 1.508 seconds 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 985470 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 985470 ']' 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 985470 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 985470 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 985470' 00:15:23.284 killing process with pid 985470 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 985470 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 985470 00:15:23.284 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:23.546 00:15:23.546 real 0m6.234s 00:15:23.546 user 0m17.652s 00:15:23.546 sys 0m0.553s 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:23.546 ************************************ 00:15:23.546 END TEST nvmf_vfio_user_nvme_compliance 00:15:23.546 ************************************ 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.546 ************************************ 00:15:23.546 START TEST nvmf_vfio_user_fuzz 00:15:23.546 ************************************ 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:23.546 * Looking for test storage... 
00:15:23.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.546 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.808 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.808 --rc genhtml_branch_coverage=1 00:15:23.809 --rc genhtml_function_coverage=1 00:15:23.809 --rc genhtml_legend=1 00:15:23.809 --rc geninfo_all_blocks=1 00:15:23.809 --rc geninfo_unexecuted_blocks=1 00:15:23.809 00:15:23.809 ' 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.809 --rc genhtml_branch_coverage=1 00:15:23.809 --rc genhtml_function_coverage=1 00:15:23.809 --rc genhtml_legend=1 00:15:23.809 --rc geninfo_all_blocks=1 00:15:23.809 --rc geninfo_unexecuted_blocks=1 00:15:23.809 00:15:23.809 ' 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.809 --rc genhtml_branch_coverage=1 00:15:23.809 --rc genhtml_function_coverage=1 00:15:23.809 --rc genhtml_legend=1 00:15:23.809 --rc geninfo_all_blocks=1 00:15:23.809 --rc geninfo_unexecuted_blocks=1 00:15:23.809 00:15:23.809 ' 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.809 --rc genhtml_branch_coverage=1 00:15:23.809 --rc genhtml_function_coverage=1 00:15:23.809 --rc genhtml_legend=1 00:15:23.809 --rc geninfo_all_blocks=1 00:15:23.809 --rc geninfo_unexecuted_blocks=1 00:15:23.809 00:15:23.809 ' 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:23.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=986588 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 986588' 00:15:23.809 Process pid: 986588 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 986588 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 986588 ']' 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.809 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
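The fuzz harness at this point launches nvmf_tgt pinned to a single core (-m 0x1) and then sits in waitforlisten until the target's JSON-RPC socket at /var/tmp/spdk.sock responds (max_retries=100 in this run). The body of waitforlisten is not shown in this log; a rough stand-alone equivalent, using rpc_get_methods purely as a readiness probe, could look like:

    # Sketch only: approximates the launch-and-wait step recorded above.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
            # The target is ready once its JSON-RPC server answers on the default socket.
            "$spdk"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1 && break
            sleep 0.5
    done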
00:15:23.810 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.810 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.751 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.751 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:24.751 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:25.693 malloc0 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
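With the target up, the whole vfio-user setup above is driven over RPC: create a VFIOUSER transport, back it with a 64 MB malloc bdev using 512-byte blocks, create subsystem nqn.2021-09.io.spdk:cnode0 (serial "spdk", any host allowed), attach the namespace, and add a VFIOUSER listener rooted at /var/run/vfio-user. rpc_cmd is the harness wrapper around the RPC client; done by hand, the same sequence would look roughly like:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    mkdir -p /var/run/vfio-user
    $rpc nvmf_create_transport -t VFIOUSER
    $rpc bdev_malloc_create 64 512 -b malloc0
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting trid string (trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0) is what the nvme_fuzz application connects to next in the log.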
00:15:25.693 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:57.810 Fuzzing completed. Shutting down the fuzz application 00:15:57.810 00:15:57.810 Dumping successful admin opcodes: 00:15:57.810 8, 9, 10, 24, 00:15:57.810 Dumping successful io opcodes: 00:15:57.810 0, 00:15:57.810 NS: 0x20000081ef00 I/O qp, Total commands completed: 1163906, total successful commands: 4578, random_seed: 1030639296 00:15:57.810 NS: 0x20000081ef00 admin qp, Total commands completed: 196540, total successful commands: 1571, random_seed: 310135424 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 986588 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 986588 ']' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 986588 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 986588 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 986588' 00:15:57.810 killing process with pid 986588 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 986588 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 986588 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:57.810 00:15:57.810 real 0m32.763s 00:15:57.810 user 0m39.316s 00:15:57.810 sys 0m22.498s 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.810 ************************************ 
00:15:57.810 END TEST nvmf_vfio_user_fuzz 00:15:57.810 ************************************ 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.810 ************************************ 00:15:57.810 START TEST nvmf_auth_target 00:15:57.810 ************************************ 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:57.810 * Looking for test storage... 00:15:57.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.810 --rc genhtml_branch_coverage=1 00:15:57.810 --rc genhtml_function_coverage=1 00:15:57.810 --rc genhtml_legend=1 00:15:57.810 --rc geninfo_all_blocks=1 00:15:57.810 --rc geninfo_unexecuted_blocks=1 00:15:57.810 00:15:57.810 ' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.810 --rc genhtml_branch_coverage=1 00:15:57.810 --rc genhtml_function_coverage=1 00:15:57.810 --rc genhtml_legend=1 00:15:57.810 --rc geninfo_all_blocks=1 00:15:57.810 --rc geninfo_unexecuted_blocks=1 00:15:57.810 00:15:57.810 ' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.810 --rc genhtml_branch_coverage=1 00:15:57.810 --rc genhtml_function_coverage=1 00:15:57.810 --rc genhtml_legend=1 00:15:57.810 --rc geninfo_all_blocks=1 00:15:57.810 --rc geninfo_unexecuted_blocks=1 00:15:57.810 00:15:57.810 ' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.810 --rc genhtml_branch_coverage=1 00:15:57.810 --rc genhtml_function_coverage=1 00:15:57.810 --rc genhtml_legend=1 00:15:57.810 --rc geninfo_all_blocks=1 00:15:57.810 --rc geninfo_unexecuted_blocks=1 00:15:57.810 00:15:57.810 ' 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.810 14:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.810 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:57.811 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:04.405 
14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:04.405 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:04.405 14:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:04.405 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:04.405 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:04.405 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.405 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:04.406 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:04.406 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.406 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.406 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:04.406 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:04.406 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.406 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:04.406 14:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:04.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:16:04.406 00:16:04.406 --- 10.0.0.2 ping statistics --- 00:16:04.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.406 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:16:04.406 00:16:04.406 --- 10.0.0.1 ping statistics --- 00:16:04.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.406 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=996774 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 996774 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 996774 ']' 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
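The run above is nvmf_tcp_init from nvmf/common.sh: one port of the E810 pair (cvl_0_0) is moved into a private network namespace for the target, the other (cvl_0_1) stays in the root namespace for the initiator, the two sides get 10.0.0.2 and 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is checked in both directions before the target application starts. The sketch below condenses the commands traced above into a standalone form; the interface names are simply the ones this machine reported for its E810 ports.

# Target-side port goes into its own namespace; initiator-side port stays in the root namespace.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target IP
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the default NVMe/TCP port on the initiator side, then verify both directions answer.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

All later target-side commands are prefixed with "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt process above is launched through that wrapper while the host-side spdk_tgt runs in the root namespace.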
00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.406 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=997000 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a250d534cb24dd84bd320c33503728b63cf6ecf4c3a1207b 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DbU 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a250d534cb24dd84bd320c33503728b63cf6ecf4c3a1207b 0 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a250d534cb24dd84bd320c33503728b63cf6ecf4c3a1207b 0 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:04.981 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a250d534cb24dd84bd320c33503728b63cf6ecf4c3a1207b 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
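gen_dhchap_key <digest> <len> above draws len/2 random bytes as a hex string with xxd, creates a /tmp/spdk.key-<digest>.XXX file with mktemp, and feeds the hex string to an inline python step (format_dhchap_key -> format_key) that writes the finished secret. The python body itself is not echoed by the xtrace; judging from the DHHC-1:00:/01:/02:/03: secrets used further down, it appears to base64-encode the ASCII hex key plus a trailing CRC32 behind a "DHHC-1:<digest id>:" prefix, so the following is a reconstruction under that assumption rather than the verbatim helper.

# Digest ids as declared above: null=0, sha256=1, sha384=2, sha512=3.
digest=null; len=48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # "len" hex characters
file=$(mktemp -t "spdk.key-$digest.XXX")
# Assumed DHHC-1 framing: base64(ASCII hex key + little-endian CRC32 of it).
python3 - "$key" "$digest" <<'PY' > "$file"
import base64, sys, zlib
key = sys.argv[1].encode()
ids = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{ids[sys.argv[2]]:02x}:{base64.b64encode(key + crc).decode()}:", end="")
PY
chmod 0600 "$file"
echo "$file"     # e.g. /tmp/spdk.key-null.DbU in this run

The same helper is reused with other digests and lengths for keys[1..3] and the controller-side ckeys[] that the sha512/sha384/sha256 invocations below generate.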
00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DbU 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DbU 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.DbU 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a6505d5920388dd5313977acb12b2ec142f2c044161d710d9f63e49db8b80ad2 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oOe 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a6505d5920388dd5313977acb12b2ec142f2c044161d710d9f63e49db8b80ad2 3 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a6505d5920388dd5313977acb12b2ec142f2c044161d710d9f63e49db8b80ad2 3 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a6505d5920388dd5313977acb12b2ec142f2c044161d710d9f63e49db8b80ad2 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:04.982 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oOe 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oOe 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.oOe 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=24e0e2388f3ce6e77c73acf9158c7c47 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qye 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 24e0e2388f3ce6e77c73acf9158c7c47 1 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 24e0e2388f3ce6e77c73acf9158c7c47 1 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=24e0e2388f3ce6e77c73acf9158c7c47 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qye 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qye 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.qye 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f393521ffa8aff77b878ccadd40eb5c794209f02e0b8024e 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.b15 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f393521ffa8aff77b878ccadd40eb5c794209f02e0b8024e 2 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f393521ffa8aff77b878ccadd40eb5c794209f02e0b8024e 2 00:16:05.245 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:05.245 14:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f393521ffa8aff77b878ccadd40eb5c794209f02e0b8024e 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.b15 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.b15 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.b15 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=33ee7977bd8708164684a0fd84be1e43f9c66f554c53d9d8 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fxf 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 33ee7977bd8708164684a0fd84be1e43f9c66f554c53d9d8 2 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 33ee7977bd8708164684a0fd84be1e43f9c66f554c53d9d8 2 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=33ee7977bd8708164684a0fd84be1e43f9c66f554c53d9d8 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fxf 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fxf 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.fxf 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=20dd472c9cd3e4b13f196b008769b920 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ak4 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 20dd472c9cd3e4b13f196b008769b920 1 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 20dd472c9cd3e4b13f196b008769b920 1 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=20dd472c9cd3e4b13f196b008769b920 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:05.246 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ak4 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ak4 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ak4 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2501932303ecd60d3309be0910334ecc90f1861280f74b1a9a6e96d07f35cf8f 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.edW 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 2501932303ecd60d3309be0910334ecc90f1861280f74b1a9a6e96d07f35cf8f 3 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2501932303ecd60d3309be0910334ecc90f1861280f74b1a9a6e96d07f35cf8f 3 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2501932303ecd60d3309be0910334ecc90f1861280f74b1a9a6e96d07f35cf8f 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.edW 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.edW 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.edW 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 996774 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 996774 ']' 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.508 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.770 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.770 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:05.770 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 997000 /var/tmp/host.sock 00:16:05.770 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 997000 ']' 00:16:05.770 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:05.770 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.770 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:05.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
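With both applications up (nvmf_tgt inside the namespace, the spdk_tgt host app listening on /var/tmp/host.sock), target/auth.sh registers every key file with the target keyring through rpc_cmd keyring_file_add_key and with the host app through the hostrpc wrapper, then loops over digest/dhgroup/key combinations calling connect_authenticate: pin the host to one digest and DH group, allow the host NQN on the subsystem with that key, attach a controller, and require the resulting qpair to report a completed DH-HMAC-CHAP exchange with exactly those parameters. One iteration of that loop (sha256, "null" DH group, key 0), condensed into plain rpc.py calls as they appear in the trace below, looks roughly like this; the paths, NQNs, and key names are the ones from this run.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Same key files on both sides: target keyring (default /var/tmp/spdk.sock) and host app.
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.DbU
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oOe
$RPC -s $HOST_SOCK keyring_file_add_key key0  /tmp/spdk.key-null.DbU
$RPC -s $HOST_SOCK keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oOe

# connect_authenticate sha256 null 0
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The qpair must show a completed authentication with the requested digest and dhgroup.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth | .state, .digest, .dhgroup'

$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0

Each round additionally reconnects through nvme-cli (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...), which is where the long base64 DHHC-1 strings further down come from, and ends with nvme disconnect plus nvmf_subsystem_remove_host before the next digest/dhgroup pair is configured.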
00:16:05.770 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.770 14:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.770 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.770 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:05.770 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:05.770 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.770 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DbU 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DbU 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DbU 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.oOe ]] 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oOe 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oOe 00:16:06.033 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oOe 00:16:06.295 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:06.295 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qye 00:16:06.295 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.295 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.295 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.295 14:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.qye 00:16:06.295 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.qye 00:16:06.556 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.b15 ]] 00:16:06.556 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.b15 00:16:06.557 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.557 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.557 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.557 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.b15 00:16:06.557 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.b15 00:16:06.817 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:06.817 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fxf 00:16:06.817 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.817 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.818 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.818 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fxf 00:16:06.818 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fxf 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ak4 ]] 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ak4 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ak4 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ak4 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:07.080 14:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.edW 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.edW 00:16:07.080 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.edW 00:16:07.341 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:07.341 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:07.341 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.341 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.341 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.341 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.603 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:07.603 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.604 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.604 
14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.865 00:16:07.865 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.865 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.865 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.128 { 00:16:08.128 "cntlid": 1, 00:16:08.128 "qid": 0, 00:16:08.128 "state": "enabled", 00:16:08.128 "thread": "nvmf_tgt_poll_group_000", 00:16:08.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:08.128 "listen_address": { 00:16:08.128 "trtype": "TCP", 00:16:08.128 "adrfam": "IPv4", 00:16:08.128 "traddr": "10.0.0.2", 00:16:08.128 "trsvcid": "4420" 00:16:08.128 }, 00:16:08.128 "peer_address": { 00:16:08.128 "trtype": "TCP", 00:16:08.128 "adrfam": "IPv4", 00:16:08.128 "traddr": "10.0.0.1", 00:16:08.128 "trsvcid": "54562" 00:16:08.128 }, 00:16:08.128 "auth": { 00:16:08.128 "state": "completed", 00:16:08.128 "digest": "sha256", 00:16:08.128 "dhgroup": "null" 00:16:08.128 } 00:16:08.128 } 00:16:08.128 ]' 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.128 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.389 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:08.389 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:08.960 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.960 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:08.960 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.960 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.960 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.960 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.960 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:08.960 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.221 14:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.221 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.482 00:16:09.482 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.482 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.482 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.482 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.482 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.482 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.743 { 00:16:09.743 "cntlid": 3, 00:16:09.743 "qid": 0, 00:16:09.743 "state": "enabled", 00:16:09.743 "thread": "nvmf_tgt_poll_group_000", 00:16:09.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:09.743 "listen_address": { 00:16:09.743 "trtype": "TCP", 00:16:09.743 "adrfam": "IPv4", 00:16:09.743 "traddr": "10.0.0.2", 00:16:09.743 "trsvcid": "4420" 00:16:09.743 }, 00:16:09.743 "peer_address": { 00:16:09.743 "trtype": "TCP", 00:16:09.743 "adrfam": "IPv4", 00:16:09.743 "traddr": "10.0.0.1", 00:16:09.743 "trsvcid": "54588" 00:16:09.743 }, 00:16:09.743 "auth": { 00:16:09.743 "state": "completed", 00:16:09.743 "digest": "sha256", 00:16:09.743 "dhgroup": "null" 00:16:09.743 } 00:16:09.743 } 00:16:09.743 ]' 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.743 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.005 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:10.005 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:10.576 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.576 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:10.576 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.576 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.576 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.576 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.576 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.576 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.836 14:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.836 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.097 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.097 { 00:16:11.097 "cntlid": 5, 00:16:11.097 "qid": 0, 00:16:11.097 "state": "enabled", 00:16:11.097 "thread": "nvmf_tgt_poll_group_000", 00:16:11.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:11.097 "listen_address": { 00:16:11.097 "trtype": "TCP", 00:16:11.097 "adrfam": "IPv4", 00:16:11.097 "traddr": "10.0.0.2", 00:16:11.097 "trsvcid": "4420" 00:16:11.097 }, 00:16:11.097 "peer_address": { 00:16:11.097 "trtype": "TCP", 00:16:11.097 "adrfam": "IPv4", 00:16:11.097 "traddr": "10.0.0.1", 00:16:11.097 "trsvcid": "35910" 00:16:11.097 }, 00:16:11.097 "auth": { 00:16:11.097 "state": "completed", 00:16:11.097 "digest": "sha256", 00:16:11.097 "dhgroup": "null" 00:16:11.097 } 00:16:11.097 } 00:16:11.097 ]' 00:16:11.097 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.357 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.357 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.357 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:11.357 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.357 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.357 14:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.357 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.615 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:11.615 14:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:12.184 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.184 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:12.184 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.184 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.185 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.185 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.185 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.185 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.445 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.445 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.707 { 00:16:12.707 "cntlid": 7, 00:16:12.707 "qid": 0, 00:16:12.707 "state": "enabled", 00:16:12.707 "thread": "nvmf_tgt_poll_group_000", 00:16:12.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:12.707 "listen_address": { 00:16:12.707 "trtype": "TCP", 00:16:12.707 "adrfam": "IPv4", 00:16:12.707 "traddr": "10.0.0.2", 00:16:12.707 "trsvcid": "4420" 00:16:12.707 }, 00:16:12.707 "peer_address": { 00:16:12.707 "trtype": "TCP", 00:16:12.707 "adrfam": "IPv4", 00:16:12.707 "traddr": "10.0.0.1", 00:16:12.707 "trsvcid": "35938" 00:16:12.707 }, 00:16:12.707 "auth": { 00:16:12.707 "state": "completed", 00:16:12.707 "digest": "sha256", 00:16:12.707 "dhgroup": "null" 00:16:12.707 } 00:16:12.707 } 00:16:12.707 ]' 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.707 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.968 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.968 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.968 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.968 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.968 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.968 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:12.968 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:13.909 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.909 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.909 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.909 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.909 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.909 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.909 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.909 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.909 14:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.909 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.170 00:16:14.170 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.170 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.170 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.432 { 00:16:14.432 "cntlid": 9, 00:16:14.432 "qid": 0, 00:16:14.432 "state": "enabled", 00:16:14.432 "thread": "nvmf_tgt_poll_group_000", 00:16:14.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:14.432 "listen_address": { 00:16:14.432 "trtype": "TCP", 00:16:14.432 "adrfam": "IPv4", 00:16:14.432 "traddr": "10.0.0.2", 00:16:14.432 "trsvcid": "4420" 00:16:14.432 }, 00:16:14.432 "peer_address": { 00:16:14.432 "trtype": "TCP", 00:16:14.432 "adrfam": "IPv4", 00:16:14.432 "traddr": "10.0.0.1", 00:16:14.432 "trsvcid": "35962" 00:16:14.432 }, 00:16:14.432 "auth": { 00:16:14.432 "state": "completed", 00:16:14.432 "digest": "sha256", 00:16:14.432 "dhgroup": "ffdhe2048" 00:16:14.432 } 00:16:14.432 } 00:16:14.432 ]' 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.432 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.694 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:14.694 14:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:15.266 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.266 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.266 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.266 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.266 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.266 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.266 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.266 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.527 14:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.527 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.788 00:16:15.788 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.788 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.788 14:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.050 { 00:16:16.050 "cntlid": 11, 00:16:16.050 "qid": 0, 00:16:16.050 "state": "enabled", 00:16:16.050 "thread": "nvmf_tgt_poll_group_000", 00:16:16.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:16.050 "listen_address": { 00:16:16.050 "trtype": "TCP", 00:16:16.050 "adrfam": "IPv4", 00:16:16.050 "traddr": "10.0.0.2", 00:16:16.050 "trsvcid": "4420" 00:16:16.050 }, 00:16:16.050 "peer_address": { 00:16:16.050 "trtype": "TCP", 00:16:16.050 "adrfam": "IPv4", 00:16:16.050 "traddr": "10.0.0.1", 00:16:16.050 "trsvcid": "35988" 00:16:16.050 }, 00:16:16.050 "auth": { 00:16:16.050 "state": "completed", 00:16:16.050 "digest": "sha256", 00:16:16.050 "dhgroup": "ffdhe2048" 00:16:16.050 } 00:16:16.050 } 00:16:16.050 ]' 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.050 14:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.050 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.312 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:16.313 14:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:16.885 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.885 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:16.885 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.885 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.885 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.885 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.885 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.885 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:17.146 14:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.146 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.406 00:16:17.406 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.406 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.406 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.668 { 00:16:17.668 "cntlid": 13, 00:16:17.668 "qid": 0, 00:16:17.668 "state": "enabled", 00:16:17.668 "thread": "nvmf_tgt_poll_group_000", 00:16:17.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:17.668 "listen_address": { 00:16:17.668 "trtype": "TCP", 00:16:17.668 "adrfam": "IPv4", 00:16:17.668 "traddr": "10.0.0.2", 00:16:17.668 "trsvcid": "4420" 00:16:17.668 }, 00:16:17.668 "peer_address": { 00:16:17.668 "trtype": "TCP", 00:16:17.668 "adrfam": "IPv4", 00:16:17.668 "traddr": "10.0.0.1", 00:16:17.668 "trsvcid": "36002" 00:16:17.668 }, 00:16:17.668 "auth": { 00:16:17.668 "state": "completed", 00:16:17.668 "digest": 
"sha256", 00:16:17.668 "dhgroup": "ffdhe2048" 00:16:17.668 } 00:16:17.668 } 00:16:17.668 ]' 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.668 14:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.930 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:17.930 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:18.501 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.501 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.501 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.501 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.501 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.501 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.501 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.501 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.762 14:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.762 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.022 00:16:19.022 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.022 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.022 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.284 { 00:16:19.284 "cntlid": 15, 00:16:19.284 "qid": 0, 00:16:19.284 "state": "enabled", 00:16:19.284 "thread": "nvmf_tgt_poll_group_000", 00:16:19.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:19.284 "listen_address": { 00:16:19.284 "trtype": "TCP", 00:16:19.284 "adrfam": "IPv4", 00:16:19.284 "traddr": "10.0.0.2", 00:16:19.284 "trsvcid": "4420" 00:16:19.284 }, 00:16:19.284 "peer_address": { 00:16:19.284 "trtype": "TCP", 00:16:19.284 "adrfam": "IPv4", 00:16:19.284 "traddr": "10.0.0.1", 00:16:19.284 
"trsvcid": "36028" 00:16:19.284 }, 00:16:19.284 "auth": { 00:16:19.284 "state": "completed", 00:16:19.284 "digest": "sha256", 00:16:19.284 "dhgroup": "ffdhe2048" 00:16:19.284 } 00:16:19.284 } 00:16:19.284 ]' 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.284 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.545 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:19.545 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:20.117 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.117 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.117 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.117 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.117 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.117 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.117 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.117 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.117 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:20.379 14:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.379 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.640 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.640 { 00:16:20.640 "cntlid": 17, 00:16:20.640 "qid": 0, 00:16:20.640 "state": "enabled", 00:16:20.640 "thread": "nvmf_tgt_poll_group_000", 00:16:20.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:20.640 "listen_address": { 00:16:20.640 "trtype": "TCP", 00:16:20.640 "adrfam": "IPv4", 
00:16:20.640 "traddr": "10.0.0.2", 00:16:20.640 "trsvcid": "4420" 00:16:20.640 }, 00:16:20.640 "peer_address": { 00:16:20.640 "trtype": "TCP", 00:16:20.640 "adrfam": "IPv4", 00:16:20.640 "traddr": "10.0.0.1", 00:16:20.640 "trsvcid": "38476" 00:16:20.640 }, 00:16:20.640 "auth": { 00:16:20.640 "state": "completed", 00:16:20.640 "digest": "sha256", 00:16:20.640 "dhgroup": "ffdhe3072" 00:16:20.640 } 00:16:20.640 } 00:16:20.640 ]' 00:16:20.640 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.902 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.902 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.902 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.902 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.902 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.902 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.902 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.163 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:21.163 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:21.735 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.735 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.735 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.735 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.735 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.735 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.735 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.735 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.995 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.257 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.257 { 
00:16:22.257 "cntlid": 19, 00:16:22.257 "qid": 0, 00:16:22.257 "state": "enabled", 00:16:22.257 "thread": "nvmf_tgt_poll_group_000", 00:16:22.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:22.257 "listen_address": { 00:16:22.257 "trtype": "TCP", 00:16:22.257 "adrfam": "IPv4", 00:16:22.257 "traddr": "10.0.0.2", 00:16:22.257 "trsvcid": "4420" 00:16:22.257 }, 00:16:22.257 "peer_address": { 00:16:22.257 "trtype": "TCP", 00:16:22.257 "adrfam": "IPv4", 00:16:22.257 "traddr": "10.0.0.1", 00:16:22.257 "trsvcid": "38510" 00:16:22.257 }, 00:16:22.257 "auth": { 00:16:22.257 "state": "completed", 00:16:22.257 "digest": "sha256", 00:16:22.257 "dhgroup": "ffdhe3072" 00:16:22.257 } 00:16:22.257 } 00:16:22.257 ]' 00:16:22.257 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.518 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.518 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.518 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.518 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.518 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.518 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.518 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.780 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:22.780 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:23.353 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.353 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.353 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.353 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.353 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.353 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.353 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.353 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.614 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.875 00:16:23.875 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.875 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.875 14:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.875 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.875 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.875 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.875 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.136 14:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.136 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.136 { 00:16:24.136 "cntlid": 21, 00:16:24.136 "qid": 0, 00:16:24.136 "state": "enabled", 00:16:24.136 "thread": "nvmf_tgt_poll_group_000", 00:16:24.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:24.136 "listen_address": { 00:16:24.136 "trtype": "TCP", 00:16:24.136 "adrfam": "IPv4", 00:16:24.136 "traddr": "10.0.0.2", 00:16:24.136 "trsvcid": "4420" 00:16:24.136 }, 00:16:24.136 "peer_address": { 00:16:24.136 "trtype": "TCP", 00:16:24.136 "adrfam": "IPv4", 00:16:24.136 "traddr": "10.0.0.1", 00:16:24.136 "trsvcid": "38538" 00:16:24.136 }, 00:16:24.136 "auth": { 00:16:24.136 "state": "completed", 00:16:24.136 "digest": "sha256", 00:16:24.136 "dhgroup": "ffdhe3072" 00:16:24.136 } 00:16:24.136 } 00:16:24.136 ]' 00:16:24.136 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.136 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.136 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.136 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.136 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.136 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.136 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.136 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.396 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:24.396 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:24.969 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.969 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.969 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.969 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.969 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:24.969 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.969 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.969 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.230 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:25.230 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.230 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.230 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:25.230 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.230 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.230 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:25.231 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.231 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.231 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.231 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.231 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.231 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.492 00:16:25.492 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.492 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.492 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.492 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.492 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.492 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.492 14:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.753 { 00:16:25.753 "cntlid": 23, 00:16:25.753 "qid": 0, 00:16:25.753 "state": "enabled", 00:16:25.753 "thread": "nvmf_tgt_poll_group_000", 00:16:25.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:25.753 "listen_address": { 00:16:25.753 "trtype": "TCP", 00:16:25.753 "adrfam": "IPv4", 00:16:25.753 "traddr": "10.0.0.2", 00:16:25.753 "trsvcid": "4420" 00:16:25.753 }, 00:16:25.753 "peer_address": { 00:16:25.753 "trtype": "TCP", 00:16:25.753 "adrfam": "IPv4", 00:16:25.753 "traddr": "10.0.0.1", 00:16:25.753 "trsvcid": "38572" 00:16:25.753 }, 00:16:25.753 "auth": { 00:16:25.753 "state": "completed", 00:16:25.753 "digest": "sha256", 00:16:25.753 "dhgroup": "ffdhe3072" 00:16:25.753 } 00:16:25.753 } 00:16:25.753 ]' 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.753 14:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.013 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:26.013 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:26.584 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.584 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.584 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.584 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.584 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:26.584 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.584 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.584 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.584 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.844 14:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.104 00:16:27.104 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.104 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.104 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.364 { 00:16:27.364 "cntlid": 25, 00:16:27.364 "qid": 0, 00:16:27.364 "state": "enabled", 00:16:27.364 "thread": "nvmf_tgt_poll_group_000", 00:16:27.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:27.364 "listen_address": { 00:16:27.364 "trtype": "TCP", 00:16:27.364 "adrfam": "IPv4", 00:16:27.364 "traddr": "10.0.0.2", 00:16:27.364 "trsvcid": "4420" 00:16:27.364 }, 00:16:27.364 "peer_address": { 00:16:27.364 "trtype": "TCP", 00:16:27.364 "adrfam": "IPv4", 00:16:27.364 "traddr": "10.0.0.1", 00:16:27.364 "trsvcid": "38594" 00:16:27.364 }, 00:16:27.364 "auth": { 00:16:27.364 "state": "completed", 00:16:27.364 "digest": "sha256", 00:16:27.364 "dhgroup": "ffdhe4096" 00:16:27.364 } 00:16:27.364 } 00:16:27.364 ]' 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:27.364 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.365 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.365 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.365 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.626 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:27.626 14:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:28.197 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.197 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.197 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.197 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.197 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.197 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.197 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.197 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.458 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.719 00:16:28.719 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.719 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.719 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.980 { 00:16:28.980 "cntlid": 27, 00:16:28.980 "qid": 0, 00:16:28.980 "state": "enabled", 00:16:28.980 "thread": "nvmf_tgt_poll_group_000", 00:16:28.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:28.980 "listen_address": { 00:16:28.980 "trtype": "TCP", 00:16:28.980 "adrfam": "IPv4", 00:16:28.980 "traddr": "10.0.0.2", 00:16:28.980 "trsvcid": "4420" 00:16:28.980 }, 00:16:28.980 "peer_address": { 00:16:28.980 "trtype": "TCP", 00:16:28.980 "adrfam": "IPv4", 00:16:28.980 "traddr": "10.0.0.1", 00:16:28.980 "trsvcid": "38606" 00:16:28.980 }, 00:16:28.980 "auth": { 00:16:28.980 "state": "completed", 00:16:28.980 "digest": "sha256", 00:16:28.980 "dhgroup": "ffdhe4096" 00:16:28.980 } 00:16:28.980 } 00:16:28.980 ]' 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.980 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.241 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:29.241 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:29.812 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:29.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.812 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.812 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.812 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.812 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.812 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.812 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.813 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.074 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.335 00:16:30.335 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
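Each pass of this trace runs the same DH-HMAC-CHAP cycle once per digest/dhgroup/key combination. The sketch below condenses one cycle from the commands visible in the trace; it is not the test source itself: the uppercase shell variables (SUBNQN, HOSTNQN, HOSTID, KEYID, DIGEST, DHGROUP, DHCHAP_SECRET) are placeholders, the helpers seen in the trace (hostrpc, rpc_cmd, bdev_connect, connect_authenticate) are flattened into direct rpc.py calls, and the target-side calls are assumed to use the default RPC socket since the trace hides that detail behind xtrace_disable.

# One sha256/ffdhe* authentication cycle, condensed from this trace (paths taken verbatim from the log).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock

# Host side: restrict the initiator bdev layer to one digest and one DH group.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

# Target side (default RPC socket assumed): admit the host NQN with the selected key.
# In this run key3 is used without a controller key; the other keys also pass --dhchap-ctrlr-key "ckey$KEYID".
"$rpc" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$KEYID"

# Attach through the host RPC server, then inspect the controller and the qpair's auth fields.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$KEYID"
"$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$rpc" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'        # digest, dhgroup, state
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator, then clean up for the next combination.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$DHCHAP_SECRET"
nvme disconnect -n "$SUBNQN"
"$rpc" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"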
00:16:30.335 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.335 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.596 { 00:16:30.596 "cntlid": 29, 00:16:30.596 "qid": 0, 00:16:30.596 "state": "enabled", 00:16:30.596 "thread": "nvmf_tgt_poll_group_000", 00:16:30.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:30.596 "listen_address": { 00:16:30.596 "trtype": "TCP", 00:16:30.596 "adrfam": "IPv4", 00:16:30.596 "traddr": "10.0.0.2", 00:16:30.596 "trsvcid": "4420" 00:16:30.596 }, 00:16:30.596 "peer_address": { 00:16:30.596 "trtype": "TCP", 00:16:30.596 "adrfam": "IPv4", 00:16:30.596 "traddr": "10.0.0.1", 00:16:30.596 "trsvcid": "56152" 00:16:30.596 }, 00:16:30.596 "auth": { 00:16:30.596 "state": "completed", 00:16:30.596 "digest": "sha256", 00:16:30.596 "dhgroup": "ffdhe4096" 00:16:30.596 } 00:16:30.596 } 00:16:30.596 ]' 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.596 14:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.857 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:30.857 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: 
--dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:31.428 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.688 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.948 00:16:31.948 14:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.948 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.948 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.208 { 00:16:32.208 "cntlid": 31, 00:16:32.208 "qid": 0, 00:16:32.208 "state": "enabled", 00:16:32.208 "thread": "nvmf_tgt_poll_group_000", 00:16:32.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:32.208 "listen_address": { 00:16:32.208 "trtype": "TCP", 00:16:32.208 "adrfam": "IPv4", 00:16:32.208 "traddr": "10.0.0.2", 00:16:32.208 "trsvcid": "4420" 00:16:32.208 }, 00:16:32.208 "peer_address": { 00:16:32.208 "trtype": "TCP", 00:16:32.208 "adrfam": "IPv4", 00:16:32.208 "traddr": "10.0.0.1", 00:16:32.208 "trsvcid": "56178" 00:16:32.208 }, 00:16:32.208 "auth": { 00:16:32.208 "state": "completed", 00:16:32.208 "digest": "sha256", 00:16:32.208 "dhgroup": "ffdhe4096" 00:16:32.208 } 00:16:32.208 } 00:16:32.208 ]' 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.208 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.468 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.468 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.468 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.468 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:32.468 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.409 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.671 00:16:33.671 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.671 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.671 14:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.932 { 00:16:33.932 "cntlid": 33, 00:16:33.932 "qid": 0, 00:16:33.932 "state": "enabled", 00:16:33.932 "thread": "nvmf_tgt_poll_group_000", 00:16:33.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.932 "listen_address": { 00:16:33.932 "trtype": "TCP", 00:16:33.932 "adrfam": "IPv4", 00:16:33.932 "traddr": "10.0.0.2", 00:16:33.932 "trsvcid": "4420" 00:16:33.932 }, 00:16:33.932 "peer_address": { 00:16:33.932 "trtype": "TCP", 00:16:33.932 "adrfam": "IPv4", 00:16:33.932 "traddr": "10.0.0.1", 00:16:33.932 "trsvcid": "56206" 00:16:33.932 }, 00:16:33.932 "auth": { 00:16:33.932 "state": "completed", 00:16:33.932 "digest": "sha256", 00:16:33.932 "dhgroup": "ffdhe6144" 00:16:33.932 } 00:16:33.932 } 00:16:33.932 ]' 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.932 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.193 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.193 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.193 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.193 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret 
DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:34.193 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:35.224 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.225 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.540 00:16:35.540 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.540 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.540 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.540 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.540 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.857 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.857 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.857 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.857 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.857 { 00:16:35.857 "cntlid": 35, 00:16:35.857 "qid": 0, 00:16:35.857 "state": "enabled", 00:16:35.857 "thread": "nvmf_tgt_poll_group_000", 00:16:35.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:35.857 "listen_address": { 00:16:35.857 "trtype": "TCP", 00:16:35.857 "adrfam": "IPv4", 00:16:35.857 "traddr": "10.0.0.2", 00:16:35.857 "trsvcid": "4420" 00:16:35.857 }, 00:16:35.857 "peer_address": { 00:16:35.857 "trtype": "TCP", 00:16:35.857 "adrfam": "IPv4", 00:16:35.857 "traddr": "10.0.0.1", 00:16:35.857 "trsvcid": "56226" 00:16:35.857 }, 00:16:35.857 "auth": { 00:16:35.857 "state": "completed", 00:16:35.857 "digest": "sha256", 00:16:35.857 "dhgroup": "ffdhe6144" 00:16:35.857 } 00:16:35.857 } 00:16:35.857 ]' 00:16:35.858 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.858 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.858 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.858 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.858 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.858 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.858 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.858 14:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.141 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:36.141 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.712 14:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.285 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.285 { 00:16:37.285 "cntlid": 37, 00:16:37.285 "qid": 0, 00:16:37.285 "state": "enabled", 00:16:37.285 "thread": "nvmf_tgt_poll_group_000", 00:16:37.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:37.285 "listen_address": { 00:16:37.285 "trtype": "TCP", 00:16:37.285 "adrfam": "IPv4", 00:16:37.285 "traddr": "10.0.0.2", 00:16:37.285 "trsvcid": "4420" 00:16:37.285 }, 00:16:37.285 "peer_address": { 00:16:37.285 "trtype": "TCP", 00:16:37.285 "adrfam": "IPv4", 00:16:37.285 "traddr": "10.0.0.1", 00:16:37.285 "trsvcid": "56242" 00:16:37.285 }, 00:16:37.285 "auth": { 00:16:37.285 "state": "completed", 00:16:37.285 "digest": "sha256", 00:16:37.285 "dhgroup": "ffdhe6144" 00:16:37.285 } 00:16:37.285 } 00:16:37.285 ]' 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.285 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.546 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.546 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.546 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.546 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:37.546 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.806 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:37.806 14:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:38.376 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.376 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.376 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.376 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.376 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.376 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.376 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.376 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.636 14:03:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.636 14:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.897 00:16:38.897 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.897 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.897 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.158 { 00:16:39.158 "cntlid": 39, 00:16:39.158 "qid": 0, 00:16:39.158 "state": "enabled", 00:16:39.158 "thread": "nvmf_tgt_poll_group_000", 00:16:39.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.158 "listen_address": { 00:16:39.158 "trtype": "TCP", 00:16:39.158 "adrfam": "IPv4", 00:16:39.158 "traddr": "10.0.0.2", 00:16:39.158 "trsvcid": "4420" 00:16:39.158 }, 00:16:39.158 "peer_address": { 00:16:39.158 "trtype": "TCP", 00:16:39.158 "adrfam": "IPv4", 00:16:39.158 "traddr": "10.0.0.1", 00:16:39.158 "trsvcid": "56270" 00:16:39.158 }, 00:16:39.158 "auth": { 00:16:39.158 "state": "completed", 00:16:39.158 "digest": "sha256", 00:16:39.158 "dhgroup": "ffdhe6144" 00:16:39.158 } 00:16:39.158 } 00:16:39.158 ]' 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.158 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.418 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:39.418 14:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:39.987 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.987 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.988 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.988 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.988 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.988 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.988 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.988 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.988 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
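The [[ ... == ... ]] comparisons repeated through the trace amount to three checks against the nvmf_subsystem_get_qpairs output. A minimal form, reusing the placeholders from the earlier sketch; exactly how the test plumbs the JSON into jq is not visible in the trace, so the herestring here is an assumption:

qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$DIGEST"  ]]   # e.g. sha256
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$DHGROUP" ]]   # e.g. ffdhe8192
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]   # DH-HMAC-CHAP handshake succeeded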
00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.248 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.818 00:16:40.818 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.818 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.818 14:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.818 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.818 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.818 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.818 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.818 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.818 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.818 { 00:16:40.818 "cntlid": 41, 00:16:40.818 "qid": 0, 00:16:40.818 "state": "enabled", 00:16:40.818 "thread": "nvmf_tgt_poll_group_000", 00:16:40.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:40.818 "listen_address": { 00:16:40.818 "trtype": "TCP", 00:16:40.818 "adrfam": "IPv4", 00:16:40.818 "traddr": "10.0.0.2", 00:16:40.818 "trsvcid": "4420" 00:16:40.818 }, 00:16:40.818 "peer_address": { 00:16:40.818 "trtype": "TCP", 00:16:40.818 "adrfam": "IPv4", 00:16:40.818 "traddr": "10.0.0.1", 00:16:40.818 "trsvcid": "44236" 00:16:40.818 }, 00:16:40.818 "auth": { 00:16:40.818 "state": "completed", 00:16:40.818 "digest": "sha256", 00:16:40.818 "dhgroup": "ffdhe8192" 00:16:40.818 } 00:16:40.818 } 00:16:40.818 ]' 00:16:40.818 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.079 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.079 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.079 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.079 14:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.079 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.079 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.079 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.339 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:41.339 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:41.910 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.910 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.910 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.910 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.910 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.910 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.910 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.910 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.172 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.433 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.694 { 00:16:42.694 "cntlid": 43, 00:16:42.694 "qid": 0, 00:16:42.694 "state": "enabled", 00:16:42.694 "thread": "nvmf_tgt_poll_group_000", 00:16:42.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.694 "listen_address": { 00:16:42.694 "trtype": "TCP", 00:16:42.694 "adrfam": "IPv4", 00:16:42.694 "traddr": "10.0.0.2", 00:16:42.694 "trsvcid": "4420" 00:16:42.694 }, 00:16:42.694 "peer_address": { 00:16:42.694 "trtype": "TCP", 00:16:42.694 "adrfam": "IPv4", 00:16:42.694 "traddr": "10.0.0.1", 00:16:42.694 "trsvcid": "44266" 00:16:42.694 }, 00:16:42.694 "auth": { 00:16:42.694 "state": "completed", 00:16:42.694 "digest": "sha256", 00:16:42.694 "dhgroup": "ffdhe8192" 00:16:42.694 } 00:16:42.694 } 00:16:42.694 ]' 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.694 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:42.955 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.955 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.955 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.955 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.955 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.955 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.216 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:43.216 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:43.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.789 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.051 14:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.051 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.312 00:16:44.312 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.312 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.312 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.573 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.573 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.573 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.573 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.573 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.573 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.573 { 00:16:44.573 "cntlid": 45, 00:16:44.573 "qid": 0, 00:16:44.573 "state": "enabled", 00:16:44.573 "thread": "nvmf_tgt_poll_group_000", 00:16:44.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.573 "listen_address": { 00:16:44.573 "trtype": "TCP", 00:16:44.573 "adrfam": "IPv4", 00:16:44.573 "traddr": "10.0.0.2", 00:16:44.573 "trsvcid": "4420" 00:16:44.573 }, 00:16:44.573 "peer_address": { 00:16:44.573 "trtype": "TCP", 00:16:44.573 "adrfam": "IPv4", 00:16:44.573 "traddr": "10.0.0.1", 00:16:44.573 "trsvcid": "44294" 00:16:44.573 }, 00:16:44.573 "auth": { 00:16:44.573 "state": "completed", 00:16:44.573 "digest": "sha256", 00:16:44.573 "dhgroup": "ffdhe8192" 00:16:44.573 } 00:16:44.573 } 00:16:44.573 ]' 00:16:44.573 
14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.573 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.573 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.834 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.834 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.834 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.834 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.834 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.834 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:44.834 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.777 14:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.777 14:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.346 00:16:46.346 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.346 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.346 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.346 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.346 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.346 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.346 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.607 { 00:16:46.607 "cntlid": 47, 00:16:46.607 "qid": 0, 00:16:46.607 "state": "enabled", 00:16:46.607 "thread": "nvmf_tgt_poll_group_000", 00:16:46.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.607 "listen_address": { 00:16:46.607 "trtype": "TCP", 00:16:46.607 "adrfam": "IPv4", 00:16:46.607 "traddr": "10.0.0.2", 00:16:46.607 "trsvcid": "4420" 00:16:46.607 }, 00:16:46.607 "peer_address": { 00:16:46.607 "trtype": "TCP", 00:16:46.607 "adrfam": "IPv4", 00:16:46.607 "traddr": "10.0.0.1", 00:16:46.607 "trsvcid": "44308" 00:16:46.607 }, 00:16:46.607 "auth": { 00:16:46.607 "state": "completed", 00:16:46.607 
"digest": "sha256", 00:16:46.607 "dhgroup": "ffdhe8192" 00:16:46.607 } 00:16:46.607 } 00:16:46.607 ]' 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.607 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.868 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:46.868 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:47.440 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:47.701 14:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.701 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.962 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.962 { 00:16:47.962 "cntlid": 49, 00:16:47.962 "qid": 0, 00:16:47.962 "state": "enabled", 00:16:47.962 "thread": "nvmf_tgt_poll_group_000", 00:16:47.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.962 "listen_address": { 00:16:47.962 "trtype": "TCP", 00:16:47.962 "adrfam": "IPv4", 
00:16:47.962 "traddr": "10.0.0.2", 00:16:47.962 "trsvcid": "4420" 00:16:47.962 }, 00:16:47.962 "peer_address": { 00:16:47.962 "trtype": "TCP", 00:16:47.962 "adrfam": "IPv4", 00:16:47.962 "traddr": "10.0.0.1", 00:16:47.962 "trsvcid": "44334" 00:16:47.962 }, 00:16:47.962 "auth": { 00:16:47.962 "state": "completed", 00:16:47.962 "digest": "sha384", 00:16:47.962 "dhgroup": "null" 00:16:47.962 } 00:16:47.962 } 00:16:47.962 ]' 00:16:47.962 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.224 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.224 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.224 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:48.224 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.224 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.224 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.224 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.485 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:48.485 14:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:49.057 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.057 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.057 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.057 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.057 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.057 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.057 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.057 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.318 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.579 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.579 { 00:16:49.579 "cntlid": 51, 00:16:49.579 "qid": 0, 00:16:49.579 "state": "enabled", 
00:16:49.579 "thread": "nvmf_tgt_poll_group_000", 00:16:49.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.579 "listen_address": { 00:16:49.579 "trtype": "TCP", 00:16:49.579 "adrfam": "IPv4", 00:16:49.579 "traddr": "10.0.0.2", 00:16:49.579 "trsvcid": "4420" 00:16:49.579 }, 00:16:49.579 "peer_address": { 00:16:49.579 "trtype": "TCP", 00:16:49.579 "adrfam": "IPv4", 00:16:49.579 "traddr": "10.0.0.1", 00:16:49.579 "trsvcid": "44370" 00:16:49.579 }, 00:16:49.579 "auth": { 00:16:49.579 "state": "completed", 00:16:49.579 "digest": "sha384", 00:16:49.579 "dhgroup": "null" 00:16:49.579 } 00:16:49.579 } 00:16:49.579 ]' 00:16:49.579 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.841 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.841 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.841 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:49.841 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.841 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.841 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.841 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.841 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:49.841 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.782 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.043 00:16:51.043 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.043 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.043 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.304 14:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.304 { 00:16:51.304 "cntlid": 53, 00:16:51.304 "qid": 0, 00:16:51.304 "state": "enabled", 00:16:51.304 "thread": "nvmf_tgt_poll_group_000", 00:16:51.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.304 "listen_address": { 00:16:51.304 "trtype": "TCP", 00:16:51.304 "adrfam": "IPv4", 00:16:51.304 "traddr": "10.0.0.2", 00:16:51.304 "trsvcid": "4420" 00:16:51.304 }, 00:16:51.304 "peer_address": { 00:16:51.304 "trtype": "TCP", 00:16:51.304 "adrfam": "IPv4", 00:16:51.304 "traddr": "10.0.0.1", 00:16:51.304 "trsvcid": "36160" 00:16:51.304 }, 00:16:51.304 "auth": { 00:16:51.304 "state": "completed", 00:16:51.304 "digest": "sha384", 00:16:51.304 "dhgroup": "null" 00:16:51.304 } 00:16:51.304 } 00:16:51.304 ]' 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.304 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.570 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:51.570 14:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:52.146 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.146 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.146 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.146 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.146 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.146 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:52.146 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.146 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.411 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.671 00:16:52.671 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.671 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.671 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.671 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.671 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.671 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.671 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.931 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.931 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.931 { 00:16:52.931 "cntlid": 55, 00:16:52.931 "qid": 0, 00:16:52.931 "state": "enabled", 00:16:52.931 "thread": "nvmf_tgt_poll_group_000", 00:16:52.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.931 "listen_address": { 00:16:52.931 "trtype": "TCP", 00:16:52.931 "adrfam": "IPv4", 00:16:52.931 "traddr": "10.0.0.2", 00:16:52.931 "trsvcid": "4420" 00:16:52.931 }, 00:16:52.931 "peer_address": { 00:16:52.931 "trtype": "TCP", 00:16:52.931 "adrfam": "IPv4", 00:16:52.931 "traddr": "10.0.0.1", 00:16:52.931 "trsvcid": "36180" 00:16:52.931 }, 00:16:52.931 "auth": { 00:16:52.931 "state": "completed", 00:16:52.931 "digest": "sha384", 00:16:52.931 "dhgroup": "null" 00:16:52.931 } 00:16:52.931 } 00:16:52.931 ]' 00:16:52.931 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.931 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.931 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.931 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:52.931 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.931 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.931 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.931 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.191 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:53.191 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:53.763 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.763 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.763 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.763 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.763 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.763 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.763 14:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.763 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.763 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.024 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.286 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.286 { 00:16:54.286 "cntlid": 57, 00:16:54.286 "qid": 0, 00:16:54.286 "state": "enabled", 00:16:54.286 "thread": "nvmf_tgt_poll_group_000", 00:16:54.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.286 "listen_address": { 00:16:54.286 "trtype": "TCP", 00:16:54.286 "adrfam": "IPv4", 00:16:54.286 "traddr": "10.0.0.2", 00:16:54.286 "trsvcid": "4420" 00:16:54.286 }, 00:16:54.286 "peer_address": { 00:16:54.286 "trtype": "TCP", 00:16:54.286 "adrfam": "IPv4", 00:16:54.286 "traddr": "10.0.0.1", 00:16:54.286 "trsvcid": "36204" 00:16:54.286 }, 00:16:54.286 "auth": { 00:16:54.286 "state": "completed", 00:16:54.286 "digest": "sha384", 00:16:54.286 "dhgroup": "ffdhe2048" 00:16:54.286 } 00:16:54.286 } 00:16:54.286 ]' 00:16:54.286 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.548 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.548 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.548 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.548 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.548 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.548 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.548 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.809 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:54.809 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:16:55.381 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.381 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.381 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.381 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.381 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.381 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.381 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.381 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.642 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.642 00:16:55.904 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.904 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.904 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.904 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.904 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.904 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.904 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.904 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.904 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.904 { 00:16:55.904 "cntlid": 59, 00:16:55.904 "qid": 0, 00:16:55.904 "state": "enabled", 00:16:55.904 "thread": "nvmf_tgt_poll_group_000", 00:16:55.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.904 "listen_address": { 00:16:55.904 "trtype": "TCP", 00:16:55.904 "adrfam": "IPv4", 00:16:55.904 "traddr": "10.0.0.2", 00:16:55.904 "trsvcid": "4420" 00:16:55.904 }, 00:16:55.904 "peer_address": { 00:16:55.904 "trtype": "TCP", 00:16:55.904 "adrfam": "IPv4", 00:16:55.904 "traddr": "10.0.0.1", 00:16:55.904 "trsvcid": "36240" 00:16:55.904 }, 00:16:55.904 "auth": { 00:16:55.904 "state": "completed", 00:16:55.904 "digest": "sha384", 00:16:55.904 "dhgroup": "ffdhe2048" 00:16:55.904 } 00:16:55.904 } 00:16:55.904 ]' 00:16:55.904 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.904 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.904 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.166 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.166 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.166 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.166 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.166 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.166 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:56.166 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.110 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.370 00:16:57.370 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.370 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:57.370 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.631 { 00:16:57.631 "cntlid": 61, 00:16:57.631 "qid": 0, 00:16:57.631 "state": "enabled", 00:16:57.631 "thread": "nvmf_tgt_poll_group_000", 00:16:57.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.631 "listen_address": { 00:16:57.631 "trtype": "TCP", 00:16:57.631 "adrfam": "IPv4", 00:16:57.631 "traddr": "10.0.0.2", 00:16:57.631 "trsvcid": "4420" 00:16:57.631 }, 00:16:57.631 "peer_address": { 00:16:57.631 "trtype": "TCP", 00:16:57.631 "adrfam": "IPv4", 00:16:57.631 "traddr": "10.0.0.1", 00:16:57.631 "trsvcid": "36276" 00:16:57.631 }, 00:16:57.631 "auth": { 00:16:57.631 "state": "completed", 00:16:57.631 "digest": "sha384", 00:16:57.631 "dhgroup": "ffdhe2048" 00:16:57.631 } 00:16:57.631 } 00:16:57.631 ]' 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.631 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.892 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:57.892 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:16:58.465 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.465 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.465 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.465 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.465 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.465 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.465 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.465 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.726 14:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.987 00:16:58.987 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.987 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.987 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.249 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.249 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.249 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.249 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.249 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.249 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.249 { 00:16:59.249 "cntlid": 63, 00:16:59.249 "qid": 0, 00:16:59.249 "state": "enabled", 00:16:59.250 "thread": "nvmf_tgt_poll_group_000", 00:16:59.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.250 "listen_address": { 00:16:59.250 "trtype": "TCP", 00:16:59.250 "adrfam": "IPv4", 00:16:59.250 "traddr": "10.0.0.2", 00:16:59.250 "trsvcid": "4420" 00:16:59.250 }, 00:16:59.250 "peer_address": { 00:16:59.250 "trtype": "TCP", 00:16:59.250 "adrfam": "IPv4", 00:16:59.250 "traddr": "10.0.0.1", 00:16:59.250 "trsvcid": "36296" 00:16:59.250 }, 00:16:59.250 "auth": { 00:16:59.250 "state": "completed", 00:16:59.250 "digest": "sha384", 00:16:59.250 "dhgroup": "ffdhe2048" 00:16:59.250 } 00:16:59.250 } 00:16:59.250 ]' 00:16:59.250 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.250 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.250 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.250 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.250 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.250 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.250 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.250 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.511 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:16:59.511 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:00.084 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:00.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.084 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.084 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.084 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.084 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.084 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.084 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.084 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.084 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.345 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.605 
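The entries above cover one full pass for sha384/ffdhe3072 with key0. A condensed, standalone sketch of the same RPC sequence, assuming the SPDK target (default RPC socket) and the host RPC server at /var/tmp/host.sock are already running and that key0/ckey0 have already been registered on both sides (key setup is not shown in this part of the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Target side: allow the host on the subsystem with the key pair under test.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller, which forces authentication with the same keys.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The passes that follow repeat this sequence with the remaining keys and, once a dhgroup is exhausted, with the next dhgroup.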
00:17:00.605 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.605 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.605 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.865 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.865 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.865 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.865 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.865 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.865 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.865 { 00:17:00.865 "cntlid": 65, 00:17:00.865 "qid": 0, 00:17:00.865 "state": "enabled", 00:17:00.865 "thread": "nvmf_tgt_poll_group_000", 00:17:00.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.865 "listen_address": { 00:17:00.865 "trtype": "TCP", 00:17:00.865 "adrfam": "IPv4", 00:17:00.865 "traddr": "10.0.0.2", 00:17:00.865 "trsvcid": "4420" 00:17:00.865 }, 00:17:00.865 "peer_address": { 00:17:00.865 "trtype": "TCP", 00:17:00.865 "adrfam": "IPv4", 00:17:00.865 "traddr": "10.0.0.1", 00:17:00.865 "trsvcid": "53934" 00:17:00.865 }, 00:17:00.865 "auth": { 00:17:00.865 "state": "completed", 00:17:00.865 "digest": "sha384", 00:17:00.865 "dhgroup": "ffdhe3072" 00:17:00.865 } 00:17:00.865 } 00:17:00.865 ]' 00:17:00.865 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.865 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.865 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.865 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.865 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.865 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.865 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.865 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.127 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:01.127 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:01.698 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.698 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.698 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.698 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.698 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.698 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.698 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.698 14:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.959 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.220 00:17:02.220 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.220 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.220 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.481 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.481 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.481 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.481 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.481 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.481 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.481 { 00:17:02.481 "cntlid": 67, 00:17:02.481 "qid": 0, 00:17:02.481 "state": "enabled", 00:17:02.481 "thread": "nvmf_tgt_poll_group_000", 00:17:02.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.481 "listen_address": { 00:17:02.481 "trtype": "TCP", 00:17:02.481 "adrfam": "IPv4", 00:17:02.481 "traddr": "10.0.0.2", 00:17:02.481 "trsvcid": "4420" 00:17:02.481 }, 00:17:02.481 "peer_address": { 00:17:02.481 "trtype": "TCP", 00:17:02.481 "adrfam": "IPv4", 00:17:02.481 "traddr": "10.0.0.1", 00:17:02.481 "trsvcid": "53968" 00:17:02.481 }, 00:17:02.481 "auth": { 00:17:02.481 "state": "completed", 00:17:02.481 "digest": "sha384", 00:17:02.481 "dhgroup": "ffdhe3072" 00:17:02.481 } 00:17:02.481 } 00:17:02.481 ]' 00:17:02.481 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.481 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.481 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.482 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.482 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.482 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.482 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.482 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.742 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret 
DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:02.742 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:03.313 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.313 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.313 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.313 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.313 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.313 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.313 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.313 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.574 14:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.836 00:17:03.836 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.836 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.836 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.096 { 00:17:04.096 "cntlid": 69, 00:17:04.096 "qid": 0, 00:17:04.096 "state": "enabled", 00:17:04.096 "thread": "nvmf_tgt_poll_group_000", 00:17:04.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:04.096 "listen_address": { 00:17:04.096 "trtype": "TCP", 00:17:04.096 "adrfam": "IPv4", 00:17:04.096 "traddr": "10.0.0.2", 00:17:04.096 "trsvcid": "4420" 00:17:04.096 }, 00:17:04.096 "peer_address": { 00:17:04.096 "trtype": "TCP", 00:17:04.096 "adrfam": "IPv4", 00:17:04.096 "traddr": "10.0.0.1", 00:17:04.096 "trsvcid": "53984" 00:17:04.096 }, 00:17:04.096 "auth": { 00:17:04.096 "state": "completed", 00:17:04.096 "digest": "sha384", 00:17:04.096 "dhgroup": "ffdhe3072" 00:17:04.096 } 00:17:04.096 } 00:17:04.096 ]' 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.096 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:04.355 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:04.355 14:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:04.926 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.926 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.926 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.926 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.926 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.926 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.926 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.926 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
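Each attach in this trace is followed by the same verification pattern (auth.sh@73-@77): read back the controller name, then compare the negotiated auth parameters on the target-side qpair against the values that were configured. A minimal sketch, assuming the same RPC sockets as above and that the target answers on the default socket; qpairs is just a local name for the JSON shown in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha384
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: the dhgroup under test (ffdhe3072 here)
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed

After the checks pass, the controller is detached (bdev_nvme_detach_controller nvme0) and the same key pair is exercised once more through nvme-cli (nvme connect ... --dhchap-secret, then nvme disconnect) before the host entry is removed with nvmf_subsystem_remove_host.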
00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.187 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.448 00:17:05.448 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.448 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.448 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.709 { 00:17:05.709 "cntlid": 71, 00:17:05.709 "qid": 0, 00:17:05.709 "state": "enabled", 00:17:05.709 "thread": "nvmf_tgt_poll_group_000", 00:17:05.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.709 "listen_address": { 00:17:05.709 "trtype": "TCP", 00:17:05.709 "adrfam": "IPv4", 00:17:05.709 "traddr": "10.0.0.2", 00:17:05.709 "trsvcid": "4420" 00:17:05.709 }, 00:17:05.709 "peer_address": { 00:17:05.709 "trtype": "TCP", 00:17:05.709 "adrfam": "IPv4", 00:17:05.709 "traddr": "10.0.0.1", 00:17:05.709 "trsvcid": "53998" 00:17:05.709 }, 00:17:05.709 "auth": { 00:17:05.709 "state": "completed", 00:17:05.709 "digest": "sha384", 00:17:05.709 "dhgroup": "ffdhe3072" 00:17:05.709 } 00:17:05.709 } 00:17:05.709 ]' 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.709 14:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.970 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:05.970 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:06.542 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.542 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.542 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.542 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.542 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.542 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.542 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.542 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.542 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
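The repetition in this trace is driven by two nested loops (auth.sh@119-@123): the outer one walks the configured DH groups, the inner one walks the key indices, and each pass reconfigures the host options before calling the connect_authenticate helper. A rough sketch of that structure, assuming auth.sh's hostrpc and connect_authenticate helpers are in scope and using only the groups and key indices visible in this part of the trace:

  for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in 0 1 2 3; do
          # Limit the host to a single digest/dhgroup so the negotiation is deterministic.
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          # Add the host with key$keyid, attach, verify, detach, then repeat via nvme-cli.
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done

Note that keyid 3 is added without --dhchap-ctrlr-key (ckeys[3] is empty), so that pass runs without bidirectional (controller) authentication.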
00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.804 14:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.066 00:17:07.066 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.066 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.066 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.327 { 00:17:07.327 "cntlid": 73, 00:17:07.327 "qid": 0, 00:17:07.327 "state": "enabled", 00:17:07.327 "thread": "nvmf_tgt_poll_group_000", 00:17:07.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.327 "listen_address": { 00:17:07.327 "trtype": "TCP", 00:17:07.327 "adrfam": "IPv4", 00:17:07.327 "traddr": "10.0.0.2", 00:17:07.327 "trsvcid": "4420" 00:17:07.327 }, 00:17:07.327 "peer_address": { 00:17:07.327 "trtype": "TCP", 00:17:07.327 "adrfam": "IPv4", 00:17:07.327 "traddr": "10.0.0.1", 00:17:07.327 "trsvcid": "54018" 00:17:07.327 }, 00:17:07.327 "auth": { 00:17:07.327 "state": "completed", 00:17:07.327 "digest": "sha384", 00:17:07.327 "dhgroup": "ffdhe4096" 00:17:07.327 } 00:17:07.327 } 00:17:07.327 ]' 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.327 
14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.327 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.588 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:07.588 14:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:08.159 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.159 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.159 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.159 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.159 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.159 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.159 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.159 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.420 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.682 00:17:08.682 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.682 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.682 14:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.943 { 00:17:08.943 "cntlid": 75, 00:17:08.943 "qid": 0, 00:17:08.943 "state": "enabled", 00:17:08.943 "thread": "nvmf_tgt_poll_group_000", 00:17:08.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.943 "listen_address": { 00:17:08.943 "trtype": "TCP", 00:17:08.943 "adrfam": "IPv4", 00:17:08.943 "traddr": "10.0.0.2", 00:17:08.943 "trsvcid": "4420" 00:17:08.943 }, 00:17:08.943 "peer_address": { 00:17:08.943 "trtype": "TCP", 00:17:08.943 "adrfam": "IPv4", 00:17:08.943 "traddr": "10.0.0.1", 00:17:08.943 "trsvcid": "54052" 00:17:08.943 }, 00:17:08.943 "auth": { 00:17:08.943 "state": "completed", 00:17:08.943 "digest": "sha384", 00:17:08.943 "dhgroup": "ffdhe4096" 00:17:08.943 } 00:17:08.943 } 00:17:08.943 ]' 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.943 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.205 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:09.205 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:09.777 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.777 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.777 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.777 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.777 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.777 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.777 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.777 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.036 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.296 00:17:10.296 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.296 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.296 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.557 { 00:17:10.557 "cntlid": 77, 00:17:10.557 "qid": 0, 00:17:10.557 "state": "enabled", 00:17:10.557 "thread": "nvmf_tgt_poll_group_000", 00:17:10.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.557 "listen_address": { 00:17:10.557 "trtype": "TCP", 00:17:10.557 "adrfam": "IPv4", 00:17:10.557 "traddr": "10.0.0.2", 00:17:10.557 "trsvcid": "4420" 00:17:10.557 }, 00:17:10.557 "peer_address": { 00:17:10.557 "trtype": "TCP", 00:17:10.557 "adrfam": "IPv4", 00:17:10.557 "traddr": "10.0.0.1", 00:17:10.557 "trsvcid": "48368" 00:17:10.557 }, 00:17:10.557 "auth": { 00:17:10.557 "state": "completed", 00:17:10.557 "digest": "sha384", 00:17:10.557 "dhgroup": "ffdhe4096" 00:17:10.557 } 00:17:10.557 } 00:17:10.557 ]' 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.557 14:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.557 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.818 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:10.818 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:11.389 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.389 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.389 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.389 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.389 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.389 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.389 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.389 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.650 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.911 00:17:11.911 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.911 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.911 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.173 { 00:17:12.173 "cntlid": 79, 00:17:12.173 "qid": 0, 00:17:12.173 "state": "enabled", 00:17:12.173 "thread": "nvmf_tgt_poll_group_000", 00:17:12.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.173 "listen_address": { 00:17:12.173 "trtype": "TCP", 00:17:12.173 "adrfam": "IPv4", 00:17:12.173 "traddr": "10.0.0.2", 00:17:12.173 "trsvcid": "4420" 00:17:12.173 }, 00:17:12.173 "peer_address": { 00:17:12.173 "trtype": "TCP", 00:17:12.173 "adrfam": "IPv4", 00:17:12.173 "traddr": "10.0.0.1", 00:17:12.173 "trsvcid": "48398" 00:17:12.173 }, 00:17:12.173 "auth": { 00:17:12.173 "state": "completed", 00:17:12.173 "digest": "sha384", 00:17:12.173 "dhgroup": "ffdhe4096" 00:17:12.173 } 00:17:12.173 } 00:17:12.173 ]' 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.173 14:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.173 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.434 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:12.434 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:13.005 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.005 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.005 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.005 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.005 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.005 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.005 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.005 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.005 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:13.266 14:04:11 
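At this point the trace has switched the host options to ffdhe6144 and is starting another connect_authenticate round with key0. Condensed into one place, a single round as this log exercises it looks roughly like the sketch below. Every command and flag is taken from the trace; SUBNQN/HOSTNQN are the values printed in the log, key0/ckey0 are key names the script registered earlier (not shown in this section), and the bare rpc.py calls stand in for the script's rpc_cmd/hostrpc wrappers.

# Sketch of one connect_authenticate round as traced in this log (not the script itself).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Restrict the host to a single digest/dhgroup pair so the negotiation is deterministic.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target: allow the host and bind it to a DH-HMAC-CHAP key and controller key.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host: attach a bdev controller, authenticating with the same keys.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify digest/dhgroup/state via nvmf_subsystem_get_qpairs as in the earlier sketch, then tear down.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

The trace then repeats this for key1 through key3 before moving to the next DH group, which is why the same attach/verify/detach pattern recurs throughout the log.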
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.266 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.530 00:17:13.530 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.530 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.791 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.791 14:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.791 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.791 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.791 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.791 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.791 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.791 { 00:17:13.791 "cntlid": 81, 00:17:13.791 "qid": 0, 00:17:13.791 "state": "enabled", 00:17:13.791 "thread": "nvmf_tgt_poll_group_000", 00:17:13.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.791 "listen_address": { 00:17:13.791 "trtype": "TCP", 00:17:13.791 "adrfam": "IPv4", 00:17:13.791 "traddr": "10.0.0.2", 00:17:13.791 "trsvcid": "4420" 00:17:13.791 }, 00:17:13.791 "peer_address": { 00:17:13.791 "trtype": "TCP", 00:17:13.791 "adrfam": "IPv4", 00:17:13.791 "traddr": "10.0.0.1", 00:17:13.791 "trsvcid": "48436" 00:17:13.791 }, 00:17:13.791 "auth": { 00:17:13.791 "state": "completed", 00:17:13.791 "digest": 
"sha384", 00:17:13.791 "dhgroup": "ffdhe6144" 00:17:13.791 } 00:17:13.791 } 00:17:13.791 ]' 00:17:13.791 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.791 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.791 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.052 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.052 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.052 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.052 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.052 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.052 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:14.052 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:14.997 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.997 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.997 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.997 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.997 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.997 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.997 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:14.997 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.997 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.258 00:17:15.258 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.258 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.258 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.520 { 00:17:15.520 "cntlid": 83, 00:17:15.520 "qid": 0, 00:17:15.520 "state": "enabled", 00:17:15.520 "thread": "nvmf_tgt_poll_group_000", 00:17:15.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.520 "listen_address": { 00:17:15.520 "trtype": "TCP", 00:17:15.520 "adrfam": "IPv4", 00:17:15.520 "traddr": "10.0.0.2", 00:17:15.520 
"trsvcid": "4420" 00:17:15.520 }, 00:17:15.520 "peer_address": { 00:17:15.520 "trtype": "TCP", 00:17:15.520 "adrfam": "IPv4", 00:17:15.520 "traddr": "10.0.0.1", 00:17:15.520 "trsvcid": "48442" 00:17:15.520 }, 00:17:15.520 "auth": { 00:17:15.520 "state": "completed", 00:17:15.520 "digest": "sha384", 00:17:15.520 "dhgroup": "ffdhe6144" 00:17:15.520 } 00:17:15.520 } 00:17:15.520 ]' 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.520 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.781 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.781 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.781 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.781 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:15.781 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.723 
14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.723 14:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.986 00:17:16.986 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.986 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.986 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.247 { 00:17:17.247 "cntlid": 85, 00:17:17.247 "qid": 0, 00:17:17.247 "state": "enabled", 00:17:17.247 "thread": "nvmf_tgt_poll_group_000", 00:17:17.247 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.247 "listen_address": { 00:17:17.247 "trtype": "TCP", 00:17:17.247 "adrfam": "IPv4", 00:17:17.247 "traddr": "10.0.0.2", 00:17:17.247 "trsvcid": "4420" 00:17:17.247 }, 00:17:17.247 "peer_address": { 00:17:17.247 "trtype": "TCP", 00:17:17.247 "adrfam": "IPv4", 00:17:17.247 "traddr": "10.0.0.1", 00:17:17.247 "trsvcid": "48470" 00:17:17.247 }, 00:17:17.247 "auth": { 00:17:17.247 "state": "completed", 00:17:17.247 "digest": "sha384", 00:17:17.247 "dhgroup": "ffdhe6144" 00:17:17.247 } 00:17:17.247 } 00:17:17.247 ]' 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:17.247 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.508 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.508 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.508 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.508 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:17.508 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.452 14:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.452 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.714 00:17:18.714 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.714 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.714 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.975 { 00:17:18.975 "cntlid": 87, 
00:17:18.975 "qid": 0, 00:17:18.975 "state": "enabled", 00:17:18.975 "thread": "nvmf_tgt_poll_group_000", 00:17:18.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.975 "listen_address": { 00:17:18.975 "trtype": "TCP", 00:17:18.975 "adrfam": "IPv4", 00:17:18.975 "traddr": "10.0.0.2", 00:17:18.975 "trsvcid": "4420" 00:17:18.975 }, 00:17:18.975 "peer_address": { 00:17:18.975 "trtype": "TCP", 00:17:18.975 "adrfam": "IPv4", 00:17:18.975 "traddr": "10.0.0.1", 00:17:18.975 "trsvcid": "48494" 00:17:18.975 }, 00:17:18.975 "auth": { 00:17:18.975 "state": "completed", 00:17:18.975 "digest": "sha384", 00:17:18.975 "dhgroup": "ffdhe6144" 00:17:18.975 } 00:17:18.975 } 00:17:18.975 ]' 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.975 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.235 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:19.235 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:19.807 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.807 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.807 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.807 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.807 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.807 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.807 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.807 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:19.807 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.067 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.638 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.638 { 00:17:20.638 "cntlid": 89, 00:17:20.638 "qid": 0, 00:17:20.638 "state": "enabled", 00:17:20.638 "thread": "nvmf_tgt_poll_group_000", 00:17:20.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.638 "listen_address": { 00:17:20.638 "trtype": "TCP", 00:17:20.638 "adrfam": "IPv4", 00:17:20.638 "traddr": "10.0.0.2", 00:17:20.638 "trsvcid": "4420" 00:17:20.638 }, 00:17:20.638 "peer_address": { 00:17:20.638 "trtype": "TCP", 00:17:20.638 "adrfam": "IPv4", 00:17:20.638 "traddr": "10.0.0.1", 00:17:20.638 "trsvcid": "35108" 00:17:20.638 }, 00:17:20.638 "auth": { 00:17:20.638 "state": "completed", 00:17:20.638 "digest": "sha384", 00:17:20.638 "dhgroup": "ffdhe8192" 00:17:20.638 } 00:17:20.638 } 00:17:20.638 ]' 00:17:20.638 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.899 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.899 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.899 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.899 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.899 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.899 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.899 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.159 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:21.159 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:21.729 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.729 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.729 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.729 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.729 14:04:19 
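The second half of each round, just traced above, drives the kernel initiator instead of the SPDK bdev layer: nvme-cli connects in-band with the raw DHHC-1 secrets, the connection is torn down, and the host entry is removed from the subsystem before the next key is tried. Roughly, with the secrets elided and the bare rpc.py call again standing in for the script's rpc_cmd wrapper:

# Sketch of the nvme-cli half of a round, mirroring the trace (DHHC-1 secrets elided).
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be

# In-band DH-HMAC-CHAP from the kernel initiator, using the plaintext DHHC-1 secrets.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."

nvme disconnect -n "$SUBNQN"

# Target side: drop the host entry so the next key/dhgroup combination starts clean.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"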
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.729 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.729 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.729 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.989 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.250 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.510 { 00:17:22.510 "cntlid": 91, 00:17:22.510 "qid": 0, 00:17:22.510 "state": "enabled", 00:17:22.510 "thread": "nvmf_tgt_poll_group_000", 00:17:22.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.510 "listen_address": { 00:17:22.510 "trtype": "TCP", 00:17:22.510 "adrfam": "IPv4", 00:17:22.510 "traddr": "10.0.0.2", 00:17:22.510 "trsvcid": "4420" 00:17:22.510 }, 00:17:22.510 "peer_address": { 00:17:22.510 "trtype": "TCP", 00:17:22.510 "adrfam": "IPv4", 00:17:22.510 "traddr": "10.0.0.1", 00:17:22.510 "trsvcid": "35118" 00:17:22.510 }, 00:17:22.510 "auth": { 00:17:22.510 "state": "completed", 00:17:22.510 "digest": "sha384", 00:17:22.510 "dhgroup": "ffdhe8192" 00:17:22.510 } 00:17:22.510 } 00:17:22.510 ]' 00:17:22.510 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.771 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.771 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.771 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.771 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.771 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.771 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.771 14:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.032 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:23.032 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:23.603 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.603 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.603 14:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.603 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.603 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.603 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.603 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.603 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.865 14:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.437 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.437 14:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.437 { 00:17:24.437 "cntlid": 93, 00:17:24.437 "qid": 0, 00:17:24.437 "state": "enabled", 00:17:24.437 "thread": "nvmf_tgt_poll_group_000", 00:17:24.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.437 "listen_address": { 00:17:24.437 "trtype": "TCP", 00:17:24.437 "adrfam": "IPv4", 00:17:24.437 "traddr": "10.0.0.2", 00:17:24.437 "trsvcid": "4420" 00:17:24.437 }, 00:17:24.437 "peer_address": { 00:17:24.437 "trtype": "TCP", 00:17:24.437 "adrfam": "IPv4", 00:17:24.437 "traddr": "10.0.0.1", 00:17:24.437 "trsvcid": "35134" 00:17:24.437 }, 00:17:24.437 "auth": { 00:17:24.437 "state": "completed", 00:17:24.437 "digest": "sha384", 00:17:24.437 "dhgroup": "ffdhe8192" 00:17:24.437 } 00:17:24.437 } 00:17:24.437 ]' 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.437 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.698 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.698 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.698 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.698 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:24.698 14:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.640 14:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.640 14:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.211 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.211 { 00:17:26.211 "cntlid": 95, 00:17:26.211 "qid": 0, 00:17:26.211 "state": "enabled", 00:17:26.211 "thread": "nvmf_tgt_poll_group_000", 00:17:26.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.211 "listen_address": { 00:17:26.211 "trtype": "TCP", 00:17:26.211 "adrfam": "IPv4", 00:17:26.211 "traddr": "10.0.0.2", 00:17:26.211 "trsvcid": "4420" 00:17:26.211 }, 00:17:26.211 "peer_address": { 00:17:26.211 "trtype": "TCP", 00:17:26.211 "adrfam": "IPv4", 00:17:26.211 "traddr": "10.0.0.1", 00:17:26.211 "trsvcid": "35162" 00:17:26.211 }, 00:17:26.211 "auth": { 00:17:26.211 "state": "completed", 00:17:26.211 "digest": "sha384", 00:17:26.211 "dhgroup": "ffdhe8192" 00:17:26.211 } 00:17:26.211 } 00:17:26.211 ]' 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.211 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.472 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.472 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.472 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.472 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.472 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.732 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:26.732 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:27.302 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.302 14:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.302 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.302 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.302 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.302 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:27.302 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.302 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.302 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.302 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.562 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.562 00:17:27.823 
14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.823 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.823 14:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.823 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.823 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.823 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.823 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.823 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.823 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.823 { 00:17:27.823 "cntlid": 97, 00:17:27.823 "qid": 0, 00:17:27.823 "state": "enabled", 00:17:27.823 "thread": "nvmf_tgt_poll_group_000", 00:17:27.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.823 "listen_address": { 00:17:27.823 "trtype": "TCP", 00:17:27.823 "adrfam": "IPv4", 00:17:27.823 "traddr": "10.0.0.2", 00:17:27.823 "trsvcid": "4420" 00:17:27.823 }, 00:17:27.823 "peer_address": { 00:17:27.823 "trtype": "TCP", 00:17:27.823 "adrfam": "IPv4", 00:17:27.823 "traddr": "10.0.0.1", 00:17:27.823 "trsvcid": "35190" 00:17:27.823 }, 00:17:27.823 "auth": { 00:17:27.823 "state": "completed", 00:17:27.823 "digest": "sha512", 00:17:27.823 "dhgroup": "null" 00:17:27.823 } 00:17:27.823 } 00:17:27.823 ]' 00:17:27.823 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.823 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.823 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.083 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:28.083 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.083 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.083 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.083 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.083 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:28.083 14:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.026 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.286 00:17:29.286 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.286 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.286 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.546 { 00:17:29.546 "cntlid": 99, 00:17:29.546 "qid": 0, 00:17:29.546 "state": "enabled", 00:17:29.546 "thread": "nvmf_tgt_poll_group_000", 00:17:29.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.546 "listen_address": { 00:17:29.546 "trtype": "TCP", 00:17:29.546 "adrfam": "IPv4", 00:17:29.546 "traddr": "10.0.0.2", 00:17:29.546 "trsvcid": "4420" 00:17:29.546 }, 00:17:29.546 "peer_address": { 00:17:29.546 "trtype": "TCP", 00:17:29.546 "adrfam": "IPv4", 00:17:29.546 "traddr": "10.0.0.1", 00:17:29.546 "trsvcid": "35214" 00:17:29.546 }, 00:17:29.546 "auth": { 00:17:29.546 "state": "completed", 00:17:29.546 "digest": "sha512", 00:17:29.546 "dhgroup": "null" 00:17:29.546 } 00:17:29.546 } 00:17:29.546 ]' 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.546 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.807 14:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:29.807 14:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:30.378 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.378 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.378 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.378 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.378 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.378 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.378 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.378 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
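For reference, each authentication round in this loop reduces to the same five RPC steps. A minimal sketch of one round follows (paths, NQNs and key names are copied from the log; the DH-HMAC-CHAP keys are assumed to already be registered under the names key2/ckey2, and the target is assumed to answer on its default RPC socket):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# 1. Restrict the host application to one digest/dhgroup combination.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# 2. Allow the host on the subsystem with a DH-HMAC-CHAP key (controller key optional).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller from the host application, which triggers authentication.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. Confirm the qpair finished authentication with the expected parameters.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'

# 5. Tear down before the next digest/dhgroup/key combination.
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN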
00:17:30.638 14:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.899 00:17:30.899 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.899 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.899 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.161 { 00:17:31.161 "cntlid": 101, 00:17:31.161 "qid": 0, 00:17:31.161 "state": "enabled", 00:17:31.161 "thread": "nvmf_tgt_poll_group_000", 00:17:31.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.161 "listen_address": { 00:17:31.161 "trtype": "TCP", 00:17:31.161 "adrfam": "IPv4", 00:17:31.161 "traddr": "10.0.0.2", 00:17:31.161 "trsvcid": "4420" 00:17:31.161 }, 00:17:31.161 "peer_address": { 00:17:31.161 "trtype": "TCP", 00:17:31.161 "adrfam": "IPv4", 00:17:31.161 "traddr": "10.0.0.1", 00:17:31.161 "trsvcid": "32910" 00:17:31.161 }, 00:17:31.161 "auth": { 00:17:31.161 "state": "completed", 00:17:31.161 "digest": "sha512", 00:17:31.161 "dhgroup": "null" 00:17:31.161 } 00:17:31.161 } 00:17:31.161 ]' 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.161 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.422 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:31.422 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:31.992 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.992 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.992 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.992 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.992 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.992 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.992 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:31.992 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.253 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.514 00:17:32.514 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.514 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.514 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.775 { 00:17:32.775 "cntlid": 103, 00:17:32.775 "qid": 0, 00:17:32.775 "state": "enabled", 00:17:32.775 "thread": "nvmf_tgt_poll_group_000", 00:17:32.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.775 "listen_address": { 00:17:32.775 "trtype": "TCP", 00:17:32.775 "adrfam": "IPv4", 00:17:32.775 "traddr": "10.0.0.2", 00:17:32.775 "trsvcid": "4420" 00:17:32.775 }, 00:17:32.775 "peer_address": { 00:17:32.775 "trtype": "TCP", 00:17:32.775 "adrfam": "IPv4", 00:17:32.775 "traddr": "10.0.0.1", 00:17:32.775 "trsvcid": "32940" 00:17:32.775 }, 00:17:32.775 "auth": { 00:17:32.775 "state": "completed", 00:17:32.775 "digest": "sha512", 00:17:32.775 "dhgroup": "null" 00:17:32.775 } 00:17:32.775 } 00:17:32.775 ]' 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:32.775 14:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.775 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.775 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.775 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.036 14:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:33.036 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:33.608 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.608 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.608 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.608 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.608 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.608 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.608 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.608 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.608 14:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
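The nvme-cli leg of each round exercises the same in-band DH-HMAC-CHAP material through the kernel initiator. A condensed sketch, with the DHHC-1 secrets shown as placeholders rather than the generated values that appear in the log:

# Connect with in-band authentication; substitute the placeholder secrets with the
# DHHC-1:xx:...: strings generated for the test (addresses and NQNs as in the log).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:00:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'

# Drop the session again once the connection has been verified.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0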
00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.870 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.136 00:17:34.136 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.136 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.136 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.423 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.423 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.423 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.423 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.423 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.423 { 00:17:34.423 "cntlid": 105, 00:17:34.423 "qid": 0, 00:17:34.423 "state": "enabled", 00:17:34.423 "thread": "nvmf_tgt_poll_group_000", 00:17:34.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.424 "listen_address": { 00:17:34.424 "trtype": "TCP", 00:17:34.424 "adrfam": "IPv4", 00:17:34.424 "traddr": "10.0.0.2", 00:17:34.424 "trsvcid": "4420" 00:17:34.424 }, 00:17:34.424 "peer_address": { 00:17:34.424 "trtype": "TCP", 00:17:34.424 "adrfam": "IPv4", 00:17:34.424 "traddr": "10.0.0.1", 00:17:34.424 "trsvcid": "32972" 00:17:34.424 }, 00:17:34.424 "auth": { 00:17:34.424 "state": "completed", 00:17:34.424 "digest": "sha512", 00:17:34.424 "dhgroup": "ffdhe2048" 00:17:34.424 } 00:17:34.424 } 00:17:34.424 ]' 00:17:34.424 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.424 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.424 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.424 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.424 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.424 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.424 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.424 14:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.697 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:34.697 14:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:35.307 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.307 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.307 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.307 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.307 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.307 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.307 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.307 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.574 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.834 00:17:35.834 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.834 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.834 14:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.095 { 00:17:36.095 "cntlid": 107, 00:17:36.095 "qid": 0, 00:17:36.095 "state": "enabled", 00:17:36.095 "thread": "nvmf_tgt_poll_group_000", 00:17:36.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.095 "listen_address": { 00:17:36.095 "trtype": "TCP", 00:17:36.095 "adrfam": "IPv4", 00:17:36.095 "traddr": "10.0.0.2", 00:17:36.095 "trsvcid": "4420" 00:17:36.095 }, 00:17:36.095 "peer_address": { 00:17:36.095 "trtype": "TCP", 00:17:36.095 "adrfam": "IPv4", 00:17:36.095 "traddr": "10.0.0.1", 00:17:36.095 "trsvcid": "32992" 00:17:36.095 }, 00:17:36.095 "auth": { 00:17:36.095 "state": "completed", 00:17:36.095 "digest": "sha512", 00:17:36.095 "dhgroup": "ffdhe2048" 00:17:36.095 } 00:17:36.095 } 00:17:36.095 ]' 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.095 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.355 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:36.355 14:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:36.927 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.927 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.927 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.927 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.927 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.927 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.927 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.927 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
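The entries around this point record one pass of the test's connect_authenticate helper for sha512 / ffdhe2048 / key2. Collapsed into standalone commands, the host-side round trip it traces looks roughly like the sketch below; RPC and HOSTNQN are shorthand introduced here for the full rpc.py path and host NQN printed in the log, rpc_cmd is the autotest wrapper that talks to the target's RPC socket, and the key2/ckey2 key names are assumed to have been registered with the target earlier in the run (outside this excerpt).

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# target side: allow the host on the subsystem with the matching key pair
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# attach a controller through the authenticated queue pair
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# confirm the attach, then tear it down again before the next key is exercised
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0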
00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.189 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.450 00:17:37.450 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.450 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.450 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.711 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.711 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.711 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.712 { 00:17:37.712 "cntlid": 109, 00:17:37.712 "qid": 0, 00:17:37.712 "state": "enabled", 00:17:37.712 "thread": "nvmf_tgt_poll_group_000", 00:17:37.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.712 "listen_address": { 00:17:37.712 "trtype": "TCP", 00:17:37.712 "adrfam": "IPv4", 00:17:37.712 "traddr": "10.0.0.2", 00:17:37.712 "trsvcid": "4420" 00:17:37.712 }, 00:17:37.712 "peer_address": { 00:17:37.712 "trtype": "TCP", 00:17:37.712 "adrfam": "IPv4", 00:17:37.712 "traddr": "10.0.0.1", 00:17:37.712 "trsvcid": "33020" 00:17:37.712 }, 00:17:37.712 "auth": { 00:17:37.712 "state": "completed", 00:17:37.712 "digest": "sha512", 00:17:37.712 "dhgroup": "ffdhe2048" 00:17:37.712 } 00:17:37.712 } 00:17:37.712 ]' 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.712 14:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.712 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.973 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:37.973 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:38.544 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.544 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.544 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.544 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.544 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.544 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.544 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.545 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.806 14:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.806 14:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.067 00:17:39.067 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.067 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.067 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.328 { 00:17:39.328 "cntlid": 111, 00:17:39.328 "qid": 0, 00:17:39.328 "state": "enabled", 00:17:39.328 "thread": "nvmf_tgt_poll_group_000", 00:17:39.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.328 "listen_address": { 00:17:39.328 "trtype": "TCP", 00:17:39.328 "adrfam": "IPv4", 00:17:39.328 "traddr": "10.0.0.2", 00:17:39.328 "trsvcid": "4420" 00:17:39.328 }, 00:17:39.328 "peer_address": { 00:17:39.328 "trtype": "TCP", 00:17:39.328 "adrfam": "IPv4", 00:17:39.328 "traddr": "10.0.0.1", 00:17:39.328 "trsvcid": "33048" 00:17:39.328 }, 00:17:39.328 "auth": { 00:17:39.328 "state": "completed", 00:17:39.328 "digest": "sha512", 00:17:39.328 "dhgroup": "ffdhe2048" 00:17:39.328 } 00:17:39.328 } 00:17:39.328 ]' 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.328 
14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.328 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.590 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:39.590 14:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:40.160 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.160 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.160 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.160 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.160 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.160 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.160 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.160 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.160 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.420 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:40.420 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.420 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.420 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:40.420 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:40.420 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.420 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.421 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.421 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.421 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.421 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.421 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.421 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.681 00:17:40.681 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.681 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.681 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.681 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.681 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.681 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.681 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.681 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.681 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.681 { 00:17:40.681 "cntlid": 113, 00:17:40.681 "qid": 0, 00:17:40.681 "state": "enabled", 00:17:40.681 "thread": "nvmf_tgt_poll_group_000", 00:17:40.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.681 "listen_address": { 00:17:40.681 "trtype": "TCP", 00:17:40.681 "adrfam": "IPv4", 00:17:40.681 "traddr": "10.0.0.2", 00:17:40.681 "trsvcid": "4420" 00:17:40.681 }, 00:17:40.681 "peer_address": { 00:17:40.681 "trtype": "TCP", 00:17:40.681 "adrfam": "IPv4", 00:17:40.681 "traddr": "10.0.0.1", 00:17:40.681 "trsvcid": "45350" 00:17:40.681 }, 00:17:40.681 "auth": { 00:17:40.681 "state": "completed", 00:17:40.681 "digest": "sha512", 00:17:40.681 "dhgroup": "ffdhe3072" 00:17:40.681 } 00:17:40.681 } 00:17:40.681 ]' 00:17:40.681 14:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.942 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.942 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.942 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.942 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.942 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.942 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.942 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.203 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:41.203 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:41.774 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.774 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.774 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.774 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.774 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.774 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.774 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:41.774 14:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.034 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.035 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.295 00:17:42.295 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.295 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.295 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.555 { 00:17:42.555 "cntlid": 115, 00:17:42.555 "qid": 0, 00:17:42.555 "state": "enabled", 00:17:42.555 "thread": "nvmf_tgt_poll_group_000", 00:17:42.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.555 "listen_address": { 00:17:42.555 "trtype": "TCP", 00:17:42.555 "adrfam": "IPv4", 00:17:42.555 "traddr": "10.0.0.2", 00:17:42.555 "trsvcid": "4420" 00:17:42.555 }, 00:17:42.555 "peer_address": { 00:17:42.555 "trtype": "TCP", 00:17:42.555 "adrfam": "IPv4", 
00:17:42.555 "traddr": "10.0.0.1", 00:17:42.555 "trsvcid": "45380" 00:17:42.555 }, 00:17:42.555 "auth": { 00:17:42.555 "state": "completed", 00:17:42.555 "digest": "sha512", 00:17:42.555 "dhgroup": "ffdhe3072" 00:17:42.555 } 00:17:42.555 } 00:17:42.555 ]' 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.555 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.816 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:42.816 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:43.387 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.387 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.387 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.387 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.387 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.387 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.387 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.387 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.648 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.908 00:17:43.908 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.908 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.908 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.169 { 00:17:44.169 "cntlid": 117, 00:17:44.169 "qid": 0, 00:17:44.169 "state": "enabled", 00:17:44.169 "thread": "nvmf_tgt_poll_group_000", 00:17:44.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.169 "listen_address": { 00:17:44.169 "trtype": "TCP", 
00:17:44.169 "adrfam": "IPv4", 00:17:44.169 "traddr": "10.0.0.2", 00:17:44.169 "trsvcid": "4420" 00:17:44.169 }, 00:17:44.169 "peer_address": { 00:17:44.169 "trtype": "TCP", 00:17:44.169 "adrfam": "IPv4", 00:17:44.169 "traddr": "10.0.0.1", 00:17:44.169 "trsvcid": "45412" 00:17:44.169 }, 00:17:44.169 "auth": { 00:17:44.169 "state": "completed", 00:17:44.169 "digest": "sha512", 00:17:44.169 "dhgroup": "ffdhe3072" 00:17:44.169 } 00:17:44.169 } 00:17:44.169 ]' 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.169 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.431 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:44.431 14:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:45.003 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.003 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.003 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.003 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.003 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.003 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.003 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.003 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.264 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.524 00:17:45.524 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.524 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.524 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.785 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.785 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.785 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.785 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.785 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.785 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.785 { 00:17:45.785 "cntlid": 119, 00:17:45.785 "qid": 0, 00:17:45.785 "state": "enabled", 00:17:45.785 "thread": "nvmf_tgt_poll_group_000", 00:17:45.785 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.785 "listen_address": { 00:17:45.785 "trtype": "TCP", 00:17:45.786 "adrfam": "IPv4", 00:17:45.786 "traddr": "10.0.0.2", 00:17:45.786 "trsvcid": "4420" 00:17:45.786 }, 00:17:45.786 "peer_address": { 00:17:45.786 "trtype": "TCP", 00:17:45.786 "adrfam": "IPv4", 00:17:45.786 "traddr": "10.0.0.1", 00:17:45.786 "trsvcid": "45452" 00:17:45.786 }, 00:17:45.786 "auth": { 00:17:45.786 "state": "completed", 00:17:45.786 "digest": "sha512", 00:17:45.786 "dhgroup": "ffdhe3072" 00:17:45.786 } 00:17:45.786 } 00:17:45.786 ]' 00:17:45.786 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.786 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.786 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.786 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.786 14:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.786 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.786 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.786 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.046 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:46.046 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:46.620 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.620 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.620 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.620 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.620 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.620 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.620 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.620 14:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.620 14:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.881 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.143 00:17:47.143 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.143 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.143 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.404 14:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.404 { 00:17:47.404 "cntlid": 121, 00:17:47.404 "qid": 0, 00:17:47.404 "state": "enabled", 00:17:47.404 "thread": "nvmf_tgt_poll_group_000", 00:17:47.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.404 "listen_address": { 00:17:47.404 "trtype": "TCP", 00:17:47.404 "adrfam": "IPv4", 00:17:47.404 "traddr": "10.0.0.2", 00:17:47.404 "trsvcid": "4420" 00:17:47.404 }, 00:17:47.404 "peer_address": { 00:17:47.404 "trtype": "TCP", 00:17:47.404 "adrfam": "IPv4", 00:17:47.404 "traddr": "10.0.0.1", 00:17:47.404 "trsvcid": "45478" 00:17:47.404 }, 00:17:47.404 "auth": { 00:17:47.404 "state": "completed", 00:17:47.404 "digest": "sha512", 00:17:47.404 "dhgroup": "ffdhe4096" 00:17:47.404 } 00:17:47.404 } 00:17:47.404 ]' 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.404 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.665 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:47.665 14:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:48.238 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.238 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.238 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.238 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.238 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
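That completes another full pass (here ffdhe4096 with key0): attach and verify through the SPDK host stack, then repeat the handshake from the kernel initiator with the matching DH-HMAC-CHAP secrets before the host entry is removed again. Stripped of the long base64 secrets (printed in full above as DHHC-1:00:... / DHHC-1:03:... strings, abbreviated here as placeholders), the kernel-side leg is approximately:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
KEY='DHHC-1:00:...'     # host secret for the key under test (full value in the log)
CKEY='DHHC-1:03:...'    # paired controller secret (full value in the log)

# kernel initiator: authenticate and connect to the subsystem
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

# tear down and deregister the host before the next key is installed
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"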
00:17:48.238 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.238 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.238 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.500 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.761 00:17:48.761 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.761 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.761 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.022 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.022 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.022 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.022 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.022 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.022 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.022 { 00:17:49.023 "cntlid": 123, 00:17:49.023 "qid": 0, 00:17:49.023 "state": "enabled", 00:17:49.023 "thread": "nvmf_tgt_poll_group_000", 00:17:49.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.023 "listen_address": { 00:17:49.023 "trtype": "TCP", 00:17:49.023 "adrfam": "IPv4", 00:17:49.023 "traddr": "10.0.0.2", 00:17:49.023 "trsvcid": "4420" 00:17:49.023 }, 00:17:49.023 "peer_address": { 00:17:49.023 "trtype": "TCP", 00:17:49.023 "adrfam": "IPv4", 00:17:49.023 "traddr": "10.0.0.1", 00:17:49.023 "trsvcid": "45514" 00:17:49.023 }, 00:17:49.023 "auth": { 00:17:49.023 "state": "completed", 00:17:49.023 "digest": "sha512", 00:17:49.023 "dhgroup": "ffdhe4096" 00:17:49.023 } 00:17:49.023 } 00:17:49.023 ]' 00:17:49.023 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.023 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.023 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.023 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:49.023 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.023 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.023 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.023 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.284 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:49.284 14:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:49.855 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.855 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.855 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.855 14:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.855 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.855 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.855 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.116 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.377 00:17:50.377 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.377 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.377 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.639 14:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.639 { 00:17:50.639 "cntlid": 125, 00:17:50.639 "qid": 0, 00:17:50.639 "state": "enabled", 00:17:50.639 "thread": "nvmf_tgt_poll_group_000", 00:17:50.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.639 "listen_address": { 00:17:50.639 "trtype": "TCP", 00:17:50.639 "adrfam": "IPv4", 00:17:50.639 "traddr": "10.0.0.2", 00:17:50.639 "trsvcid": "4420" 00:17:50.639 }, 00:17:50.639 "peer_address": { 00:17:50.639 "trtype": "TCP", 00:17:50.639 "adrfam": "IPv4", 00:17:50.639 "traddr": "10.0.0.1", 00:17:50.639 "trsvcid": "55538" 00:17:50.639 }, 00:17:50.639 "auth": { 00:17:50.639 "state": "completed", 00:17:50.639 "digest": "sha512", 00:17:50.639 "dhgroup": "ffdhe4096" 00:17:50.639 } 00:17:50.639 } 00:17:50.639 ]' 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.639 14:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.899 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:50.899 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:51.471 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.471 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.471 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.471 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.732 14:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.992 00:17:51.992 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.992 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.992 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.252 14:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.252 { 00:17:52.252 "cntlid": 127, 00:17:52.252 "qid": 0, 00:17:52.252 "state": "enabled", 00:17:52.252 "thread": "nvmf_tgt_poll_group_000", 00:17:52.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.252 "listen_address": { 00:17:52.252 "trtype": "TCP", 00:17:52.252 "adrfam": "IPv4", 00:17:52.252 "traddr": "10.0.0.2", 00:17:52.252 "trsvcid": "4420" 00:17:52.252 }, 00:17:52.252 "peer_address": { 00:17:52.252 "trtype": "TCP", 00:17:52.252 "adrfam": "IPv4", 00:17:52.252 "traddr": "10.0.0.1", 00:17:52.252 "trsvcid": "55566" 00:17:52.252 }, 00:17:52.252 "auth": { 00:17:52.252 "state": "completed", 00:17:52.252 "digest": "sha512", 00:17:52.252 "dhgroup": "ffdhe4096" 00:17:52.252 } 00:17:52.252 } 00:17:52.252 ]' 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.252 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.513 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.513 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.513 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.513 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:52.513 14:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:53.083 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.083 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.083 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.083 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.083 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.083 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.083 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.083 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:53.083 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.344 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.605 00:17:53.605 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.605 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.605 
14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.866 { 00:17:53.866 "cntlid": 129, 00:17:53.866 "qid": 0, 00:17:53.866 "state": "enabled", 00:17:53.866 "thread": "nvmf_tgt_poll_group_000", 00:17:53.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.866 "listen_address": { 00:17:53.866 "trtype": "TCP", 00:17:53.866 "adrfam": "IPv4", 00:17:53.866 "traddr": "10.0.0.2", 00:17:53.866 "trsvcid": "4420" 00:17:53.866 }, 00:17:53.866 "peer_address": { 00:17:53.866 "trtype": "TCP", 00:17:53.866 "adrfam": "IPv4", 00:17:53.866 "traddr": "10.0.0.1", 00:17:53.866 "trsvcid": "55590" 00:17:53.866 }, 00:17:53.866 "auth": { 00:17:53.866 "state": "completed", 00:17:53.866 "digest": "sha512", 00:17:53.866 "dhgroup": "ffdhe6144" 00:17:53.866 } 00:17:53.866 } 00:17:53.866 ]' 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.866 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.126 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:54.126 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret 
DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:17:54.697 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.697 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.697 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.697 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.697 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.697 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.697 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.697 14:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.958 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.219 00:17:55.219 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.219 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.219 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.479 { 00:17:55.479 "cntlid": 131, 00:17:55.479 "qid": 0, 00:17:55.479 "state": "enabled", 00:17:55.479 "thread": "nvmf_tgt_poll_group_000", 00:17:55.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.479 "listen_address": { 00:17:55.479 "trtype": "TCP", 00:17:55.479 "adrfam": "IPv4", 00:17:55.479 "traddr": "10.0.0.2", 00:17:55.479 "trsvcid": "4420" 00:17:55.479 }, 00:17:55.479 "peer_address": { 00:17:55.479 "trtype": "TCP", 00:17:55.479 "adrfam": "IPv4", 00:17:55.479 "traddr": "10.0.0.1", 00:17:55.479 "trsvcid": "55610" 00:17:55.479 }, 00:17:55.479 "auth": { 00:17:55.479 "state": "completed", 00:17:55.479 "digest": "sha512", 00:17:55.479 "dhgroup": "ffdhe6144" 00:17:55.479 } 00:17:55.479 } 00:17:55.479 ]' 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.479 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.739 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.739 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.739 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.739 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:55.739 14:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.682 14:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.944 00:17:56.944 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.944 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.944 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.205 { 00:17:57.205 "cntlid": 133, 00:17:57.205 "qid": 0, 00:17:57.205 "state": "enabled", 00:17:57.205 "thread": "nvmf_tgt_poll_group_000", 00:17:57.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.205 "listen_address": { 00:17:57.205 "trtype": "TCP", 00:17:57.205 "adrfam": "IPv4", 00:17:57.205 "traddr": "10.0.0.2", 00:17:57.205 "trsvcid": "4420" 00:17:57.205 }, 00:17:57.205 "peer_address": { 00:17:57.205 "trtype": "TCP", 00:17:57.205 "adrfam": "IPv4", 00:17:57.205 "traddr": "10.0.0.1", 00:17:57.205 "trsvcid": "55622" 00:17:57.205 }, 00:17:57.205 "auth": { 00:17:57.205 "state": "completed", 00:17:57.205 "digest": "sha512", 00:17:57.205 "dhgroup": "ffdhe6144" 00:17:57.205 } 00:17:57.205 } 00:17:57.205 ]' 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.205 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.466 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.466 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.466 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.466 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret 
DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:57.466 14:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.408 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:58.409 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.409 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.409 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.409 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:58.409 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:58.409 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.670 00:17:58.670 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.670 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.670 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.931 { 00:17:58.931 "cntlid": 135, 00:17:58.931 "qid": 0, 00:17:58.931 "state": "enabled", 00:17:58.931 "thread": "nvmf_tgt_poll_group_000", 00:17:58.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.931 "listen_address": { 00:17:58.931 "trtype": "TCP", 00:17:58.931 "adrfam": "IPv4", 00:17:58.931 "traddr": "10.0.0.2", 00:17:58.931 "trsvcid": "4420" 00:17:58.931 }, 00:17:58.931 "peer_address": { 00:17:58.931 "trtype": "TCP", 00:17:58.931 "adrfam": "IPv4", 00:17:58.931 "traddr": "10.0.0.1", 00:17:58.931 "trsvcid": "55642" 00:17:58.931 }, 00:17:58.931 "auth": { 00:17:58.931 "state": "completed", 00:17:58.931 "digest": "sha512", 00:17:58.931 "dhgroup": "ffdhe6144" 00:17:58.931 } 00:17:58.931 } 00:17:58.931 ]' 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.931 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.191 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.191 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.191 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.191 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:59.191 14:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:17:59.761 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.022 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.594 00:18:00.594 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.594 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.594 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.855 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.855 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.855 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.855 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.855 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.855 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.855 { 00:18:00.855 "cntlid": 137, 00:18:00.855 "qid": 0, 00:18:00.855 "state": "enabled", 00:18:00.855 "thread": "nvmf_tgt_poll_group_000", 00:18:00.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.855 "listen_address": { 00:18:00.855 "trtype": "TCP", 00:18:00.855 "adrfam": "IPv4", 00:18:00.855 "traddr": "10.0.0.2", 00:18:00.855 "trsvcid": "4420" 00:18:00.855 }, 00:18:00.855 "peer_address": { 00:18:00.855 "trtype": "TCP", 00:18:00.855 "adrfam": "IPv4", 00:18:00.855 "traddr": "10.0.0.1", 00:18:00.855 "trsvcid": "53526" 00:18:00.855 }, 00:18:00.855 "auth": { 00:18:00.855 "state": "completed", 00:18:00.855 "digest": "sha512", 00:18:00.855 "dhgroup": "ffdhe8192" 00:18:00.855 } 00:18:00.855 } 00:18:00.855 ]' 00:18:00.855 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.855 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.855 14:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.855 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.855 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.855 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.855 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.855 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.116 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:18:01.116 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:18:01.686 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.686 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.686 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.686 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.686 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.686 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.686 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.686 14:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.947 14:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.947 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.519 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.519 { 00:18:02.519 "cntlid": 139, 00:18:02.519 "qid": 0, 00:18:02.519 "state": "enabled", 00:18:02.519 "thread": "nvmf_tgt_poll_group_000", 00:18:02.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.519 "listen_address": { 00:18:02.519 "trtype": "TCP", 00:18:02.519 "adrfam": "IPv4", 00:18:02.519 "traddr": "10.0.0.2", 00:18:02.519 "trsvcid": "4420" 00:18:02.519 }, 00:18:02.519 "peer_address": { 00:18:02.519 "trtype": "TCP", 00:18:02.519 "adrfam": "IPv4", 00:18:02.519 "traddr": "10.0.0.1", 00:18:02.519 "trsvcid": "53548" 00:18:02.519 }, 00:18:02.519 "auth": { 00:18:02.519 "state": "completed", 00:18:02.519 "digest": "sha512", 00:18:02.519 "dhgroup": "ffdhe8192" 00:18:02.519 } 00:18:02.519 } 00:18:02.519 ]' 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.519 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.780 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.780 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.780 14:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.780 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.780 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.041 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:18:03.041 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: --dhchap-ctrl-secret DHHC-1:02:ZjM5MzUyMWZmYThhZmY3N2I4NzhjY2FkZDQwZWI1Yzc5NDIwOWYwMmUwYjgwMjRlcR39dg==: 00:18:03.614 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.614 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.614 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.614 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.614 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.614 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.614 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.614 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.876 14:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.876 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.137 00:18:04.137 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.137 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.137 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.398 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.398 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.398 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.398 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.398 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.398 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.398 { 00:18:04.398 "cntlid": 141, 00:18:04.398 "qid": 0, 00:18:04.398 "state": "enabled", 00:18:04.398 "thread": "nvmf_tgt_poll_group_000", 00:18:04.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.398 "listen_address": { 00:18:04.398 "trtype": "TCP", 00:18:04.398 "adrfam": "IPv4", 00:18:04.398 "traddr": "10.0.0.2", 00:18:04.398 "trsvcid": "4420" 00:18:04.398 }, 00:18:04.398 "peer_address": { 00:18:04.398 "trtype": "TCP", 00:18:04.398 "adrfam": "IPv4", 00:18:04.398 "traddr": "10.0.0.1", 00:18:04.398 "trsvcid": "53580" 00:18:04.398 }, 00:18:04.398 "auth": { 00:18:04.398 "state": "completed", 00:18:04.398 "digest": "sha512", 00:18:04.398 "dhgroup": "ffdhe8192" 00:18:04.398 } 00:18:04.398 } 00:18:04.398 ]' 00:18:04.398 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.398 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.398 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.659 14:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.659 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.659 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.659 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.659 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.920 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:18:04.920 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:01:MjBkZDQ3MmM5Y2QzZTRiMTNmMTk2YjAwODc2OWI5MjDlstQU: 00:18:05.492 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.492 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.492 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.492 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.492 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.492 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.492 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.492 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.753 14:05:03 
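The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen above is what makes the controller key optional: when no ckey exists for a given index, the array stays empty and the --dhchap-ctrlr-key flag is omitted, which is why the key3 iteration adds the host with --dhchap-key key3 alone (unidirectional authentication). A small illustration with hypothetical values:

  ckeys=(ck0 ck1 ck2)                          # hypothetical: no controller key for index 3
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo ${#ckey[@]}                             # 0 -> the flag is simply left out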
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.753 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.013 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.274 { 00:18:06.274 "cntlid": 143, 00:18:06.274 "qid": 0, 00:18:06.274 "state": "enabled", 00:18:06.274 "thread": "nvmf_tgt_poll_group_000", 00:18:06.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.274 "listen_address": { 00:18:06.274 "trtype": "TCP", 00:18:06.274 "adrfam": "IPv4", 00:18:06.274 "traddr": "10.0.0.2", 00:18:06.274 "trsvcid": "4420" 00:18:06.274 }, 00:18:06.274 "peer_address": { 00:18:06.274 "trtype": "TCP", 00:18:06.274 "adrfam": "IPv4", 00:18:06.274 "traddr": "10.0.0.1", 00:18:06.274 "trsvcid": "53620" 00:18:06.274 }, 00:18:06.274 "auth": { 00:18:06.274 "state": "completed", 00:18:06.274 "digest": "sha512", 00:18:06.274 "dhgroup": "ffdhe8192" 00:18:06.274 } 00:18:06.274 } 00:18:06.274 ]' 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.274 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.274 
14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.535 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.535 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.535 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.535 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.535 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.795 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:18:06.795 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:18:07.367 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:07.368 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.629 14:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.629 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.891 00:18:07.891 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.891 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.891 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.150 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.150 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.150 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.150 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.150 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.150 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.150 { 00:18:08.150 "cntlid": 145, 00:18:08.150 "qid": 0, 00:18:08.150 "state": "enabled", 00:18:08.150 "thread": "nvmf_tgt_poll_group_000", 00:18:08.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.150 "listen_address": { 00:18:08.150 "trtype": "TCP", 00:18:08.150 "adrfam": "IPv4", 00:18:08.150 "traddr": "10.0.0.2", 00:18:08.150 "trsvcid": "4420" 00:18:08.150 }, 00:18:08.150 "peer_address": { 00:18:08.150 
"trtype": "TCP", 00:18:08.150 "adrfam": "IPv4", 00:18:08.150 "traddr": "10.0.0.1", 00:18:08.150 "trsvcid": "53638" 00:18:08.150 }, 00:18:08.150 "auth": { 00:18:08.150 "state": "completed", 00:18:08.150 "digest": "sha512", 00:18:08.150 "dhgroup": "ffdhe8192" 00:18:08.150 } 00:18:08.150 } 00:18:08.150 ]' 00:18:08.150 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.150 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.150 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.409 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.409 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.409 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.409 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.409 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.409 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:18:08.410 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTI1MGQ1MzRjYjI0ZGQ4NGJkMzIwYzMzNTAzNzI4YjYzY2Y2ZWNmNGMzYTEyMDdigT+BuQ==: --dhchap-ctrl-secret DHHC-1:03:YTY1MDVkNTkyMDM4OGRkNTMxMzk3N2FjYjEyYjJlYzE0MmYyYzA0NDE2MWQ3MTBkOWY2M2U0OWRiOGI4MGFkMhCzffE=: 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:09.349 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:09.611 request: 00:18:09.611 { 00:18:09.611 "name": "nvme0", 00:18:09.611 "trtype": "tcp", 00:18:09.611 "traddr": "10.0.0.2", 00:18:09.611 "adrfam": "ipv4", 00:18:09.611 "trsvcid": "4420", 00:18:09.611 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:09.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.611 "prchk_reftag": false, 00:18:09.611 "prchk_guard": false, 00:18:09.611 "hdgst": false, 00:18:09.611 "ddgst": false, 00:18:09.611 "dhchap_key": "key2", 00:18:09.611 "allow_unrecognized_csi": false, 00:18:09.611 "method": "bdev_nvme_attach_controller", 00:18:09.611 "req_id": 1 00:18:09.611 } 00:18:09.611 Got JSON-RPC error response 00:18:09.611 response: 00:18:09.611 { 00:18:09.611 "code": -5, 00:18:09.611 "message": "Input/output error" 00:18:09.611 } 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.611 14:05:07 
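The request/response pair above is a deliberate failure path: the subsystem was re-added with key1 only, so attaching with key2 must be rejected, and the harness expects exactly the JSON-RPC -5 "Input/output error" shown. A sketch of that expected-failure check, assuming the same host socket and NQNs as before:

  hostrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"
  # key2 is not configured for this host on cnode0, so the attach must fail.
  if $hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
       -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
    echo "authentication unexpectedly succeeded" >&2
    exit 1
  fi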
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:09.611 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.184 request: 00:18:10.184 { 00:18:10.184 "name": "nvme0", 00:18:10.184 "trtype": "tcp", 00:18:10.184 "traddr": "10.0.0.2", 00:18:10.184 "adrfam": "ipv4", 00:18:10.184 "trsvcid": "4420", 00:18:10.184 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:10.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.184 "prchk_reftag": false, 00:18:10.184 "prchk_guard": false, 00:18:10.184 "hdgst": false, 00:18:10.184 "ddgst": false, 00:18:10.184 "dhchap_key": "key1", 00:18:10.184 "dhchap_ctrlr_key": "ckey2", 00:18:10.184 "allow_unrecognized_csi": false, 00:18:10.184 "method": "bdev_nvme_attach_controller", 00:18:10.184 "req_id": 1 00:18:10.184 } 00:18:10.184 Got JSON-RPC error response 00:18:10.184 response: 00:18:10.184 { 00:18:10.184 "code": -5, 00:18:10.184 "message": "Input/output error" 00:18:10.184 } 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:10.184 14:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.184 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.446 request: 00:18:10.446 { 00:18:10.446 "name": "nvme0", 00:18:10.446 "trtype": "tcp", 00:18:10.446 "traddr": "10.0.0.2", 00:18:10.446 "adrfam": "ipv4", 00:18:10.446 "trsvcid": "4420", 00:18:10.446 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:10.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.446 "prchk_reftag": false, 00:18:10.446 "prchk_guard": false, 00:18:10.446 "hdgst": false, 00:18:10.446 "ddgst": false, 00:18:10.446 "dhchap_key": "key1", 00:18:10.446 "dhchap_ctrlr_key": "ckey1", 00:18:10.446 "allow_unrecognized_csi": false, 00:18:10.446 "method": "bdev_nvme_attach_controller", 00:18:10.446 "req_id": 1 00:18:10.446 } 00:18:10.446 Got JSON-RPC error response 00:18:10.446 response: 00:18:10.446 { 00:18:10.446 "code": -5, 00:18:10.446 "message": "Input/output error" 00:18:10.446 } 00:18:10.446 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:10.446 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.446 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.446 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.446 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.446 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.446 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 996774 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 996774 ']' 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 996774 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 996774 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 996774' 00:18:10.707 killing process with pid 996774 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 996774 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 996774 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1022983 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1022983 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1022983 ']' 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.707 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1022983 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1022983 ']' 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
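At this point the harness kills the first target and restarts it with --wait-for-rpc, which leaves the app in a pre-initialization state so configuration RPCs (the keyring entries registered just below) can be issued before subsystems start, and with -L nvmf_auth to enable the auth debug log component. The restart as logged, re-stated standalone:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  # rpc.py then waits for the default UNIX socket /var/tmp/spdk.sock, per the
  # "Waiting for process to start up..." message above.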
00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.649 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.909 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:11.909 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:11.909 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.909 14:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 null0 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DbU 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.oOe ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oOe 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qye 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.b15 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.b15 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:11.909 14:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fxf 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ak4 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ak4 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.edW 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
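In this second pass the DH-HMAC-CHAP secrets are not passed inline; they are registered on the restarted target as named keyring entries loaded from the /tmp/spdk.key-* files shown above, and the subsystem/host configuration then refers to them by name. A condensed sketch of that pattern against the target socket logged above (file names as logged, key3 shown as the example):

  tgtrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # default -s /var/tmp/spdk.sock
  $tgtrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.edW
  $tgtrpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
  # key3 has no matching ckey file, so no --dhchap-ctrlr-key is supplied here.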
00:18:11.909 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.852 nvme0n1 00:18:12.852 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.852 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.852 14:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.852 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.852 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.852 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.852 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.852 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.852 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.852 { 00:18:12.852 "cntlid": 1, 00:18:12.852 "qid": 0, 00:18:12.852 "state": "enabled", 00:18:12.852 "thread": "nvmf_tgt_poll_group_000", 00:18:12.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.852 "listen_address": { 00:18:12.852 "trtype": "TCP", 00:18:12.852 "adrfam": "IPv4", 00:18:12.852 "traddr": "10.0.0.2", 00:18:12.852 "trsvcid": "4420" 00:18:12.852 }, 00:18:12.852 "peer_address": { 00:18:12.852 "trtype": "TCP", 00:18:12.852 "adrfam": "IPv4", 00:18:12.852 "traddr": "10.0.0.1", 00:18:12.852 "trsvcid": "41814" 00:18:12.852 }, 00:18:12.852 "auth": { 00:18:12.852 "state": "completed", 00:18:12.852 "digest": "sha512", 00:18:12.852 "dhgroup": "ffdhe8192" 00:18:12.852 } 00:18:12.852 } 00:18:12.852 ]' 00:18:12.852 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.113 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.113 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.113 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.113 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.113 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.113 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.113 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.374 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:18:13.374 14:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:13.945 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.206 request: 00:18:14.206 { 00:18:14.206 "name": "nvme0", 00:18:14.206 "trtype": "tcp", 00:18:14.206 "traddr": "10.0.0.2", 00:18:14.206 "adrfam": "ipv4", 00:18:14.206 "trsvcid": "4420", 00:18:14.206 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.206 "prchk_reftag": false, 00:18:14.206 "prchk_guard": false, 00:18:14.206 "hdgst": false, 00:18:14.206 "ddgst": false, 00:18:14.206 "dhchap_key": "key3", 00:18:14.206 "allow_unrecognized_csi": false, 00:18:14.206 "method": "bdev_nvme_attach_controller", 00:18:14.206 "req_id": 1 00:18:14.206 } 00:18:14.206 Got JSON-RPC error response 00:18:14.206 response: 00:18:14.206 { 00:18:14.206 "code": -5, 00:18:14.206 "message": "Input/output error" 00:18:14.206 } 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:14.206 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.468 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.729 request: 00:18:14.729 { 00:18:14.729 "name": "nvme0", 00:18:14.729 "trtype": "tcp", 00:18:14.729 "traddr": "10.0.0.2", 00:18:14.729 "adrfam": "ipv4", 00:18:14.729 "trsvcid": "4420", 00:18:14.729 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.729 "prchk_reftag": false, 00:18:14.729 "prchk_guard": false, 00:18:14.729 "hdgst": false, 00:18:14.729 "ddgst": false, 00:18:14.729 "dhchap_key": "key3", 00:18:14.729 "allow_unrecognized_csi": false, 00:18:14.729 "method": "bdev_nvme_attach_controller", 00:18:14.729 "req_id": 1 00:18:14.729 } 00:18:14.729 Got JSON-RPC error response 00:18:14.729 response: 00:18:14.729 { 00:18:14.729 "code": -5, 00:18:14.729 "message": "Input/output error" 00:18:14.729 } 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.729 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:14.990 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:15.251 request: 00:18:15.251 { 00:18:15.251 "name": "nvme0", 00:18:15.251 "trtype": "tcp", 00:18:15.251 "traddr": "10.0.0.2", 00:18:15.251 "adrfam": "ipv4", 00:18:15.251 "trsvcid": "4420", 00:18:15.251 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:15.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.251 "prchk_reftag": false, 00:18:15.251 "prchk_guard": false, 00:18:15.251 "hdgst": false, 00:18:15.251 "ddgst": false, 00:18:15.251 "dhchap_key": "key0", 00:18:15.251 "dhchap_ctrlr_key": "key1", 00:18:15.251 "allow_unrecognized_csi": false, 00:18:15.251 "method": "bdev_nvme_attach_controller", 00:18:15.251 "req_id": 1 00:18:15.251 } 00:18:15.251 Got JSON-RPC error response 00:18:15.251 response: 00:18:15.251 { 00:18:15.251 "code": -5, 00:18:15.251 "message": "Input/output error" 00:18:15.251 } 00:18:15.251 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:15.251 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.251 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.251 14:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.251 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:15.251 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:15.251 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:15.512 nvme0n1 00:18:15.512 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:15.512 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:15.512 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.773 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.773 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.773 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.773 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:15.773 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.773 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.773 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.773 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:15.773 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:15.773 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:16.745 nvme0n1 00:18:16.745 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:16.745 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:16.746 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.746 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.746 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:16.746 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.746 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.746 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.746 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:16.746 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:16.746 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.007 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.007 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:18:17.007 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: --dhchap-ctrl-secret DHHC-1:03:MjUwMTkzMjMwM2VjZDYwZDMzMDliZTA5MTAzMzRlY2M5MGYxODYxMjgwZjc0YjFhOWE2ZTk2ZDA3ZjM1Y2Y4Zl9bMuI=: 00:18:17.579 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:17.579 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:17.579 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:17.579 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:17.579 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:17.579 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:17.579 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:17.579 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.579 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:17.840 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:18.101 request: 00:18:18.101 { 00:18:18.101 "name": "nvme0", 00:18:18.101 "trtype": "tcp", 00:18:18.101 "traddr": "10.0.0.2", 00:18:18.101 "adrfam": "ipv4", 00:18:18.101 "trsvcid": "4420", 00:18:18.101 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:18.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.101 "prchk_reftag": false, 00:18:18.101 "prchk_guard": false, 00:18:18.101 "hdgst": false, 00:18:18.101 "ddgst": false, 00:18:18.101 "dhchap_key": "key1", 00:18:18.101 "allow_unrecognized_csi": false, 00:18:18.101 "method": "bdev_nvme_attach_controller", 00:18:18.101 "req_id": 1 00:18:18.101 } 00:18:18.101 Got JSON-RPC error response 00:18:18.101 response: 00:18:18.101 { 00:18:18.101 "code": -5, 00:18:18.101 "message": "Input/output error" 00:18:18.101 } 00:18:18.101 14:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:18.101 14:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.101 14:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.101 14:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.101 14:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:18.101 14:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:18.101 14:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:19.044 nvme0n1 00:18:19.044 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:19.044 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:19.044 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.044 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.044 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.044 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.305 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.305 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.305 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.305 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.305 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:19.305 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:19.305 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:19.566 nvme0n1 00:18:19.566 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:19.566 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:19.566 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.827 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.827 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.827 14:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: '' 2s 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: ]] 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjRlMGUyMzg4ZjNjZTZlNzdjNzNhY2Y5MTU4YzdjNDehszRh: 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:19.827 14:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: 2s 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: ]] 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzNlZTc5NzdiZDg3MDgxNjQ2ODRhMGZkODRiZTFlNDNmOWM2NmY1NTRjNTNkOWQ4G9V6lA==: 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:22.372 14:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:24.285 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:24.285 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:24.286 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:24.857 nvme0n1 00:18:24.857 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:24.857 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.857 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.857 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:24.857 14:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:25.427 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:25.687 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:25.687 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:25.687 14:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:25.949 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:26.210 request: 00:18:26.210 { 00:18:26.210 "name": "nvme0", 00:18:26.210 "dhchap_key": "key1", 00:18:26.210 "dhchap_ctrlr_key": "key3", 00:18:26.210 "method": "bdev_nvme_set_keys", 00:18:26.210 "req_id": 1 00:18:26.210 } 00:18:26.210 Got JSON-RPC error response 00:18:26.210 response: 00:18:26.210 { 00:18:26.210 "code": -13, 00:18:26.210 "message": "Permission denied" 00:18:26.210 } 00:18:26.210 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:26.210 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.210 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.210 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.210 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:26.210 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:26.210 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.471 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:26.471 14:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:27.415 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:27.415 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:27.415 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.678 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:27.678 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.678 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.678 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.678 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.678 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:27.678 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:27.678 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:28.621 nvme0n1 00:18:28.621 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.621 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.621 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.621 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.621 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:28.621 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.621 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:28.621 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
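
The entries above walk through SPDK's DH-HMAC-CHAP re-key path: the target updates the key pair it will accept for this host with nvmf_subsystem_set_keys, the host then rotates the keys on its already-attached controller with bdev_nvme_set_keys, and a pairing the target was not configured for is rejected with JSON-RPC error -13 ("Permission denied"), while an attach attempted with keys or negotiation settings the target does not accept fails earlier with -5 ("Input/output error"). A minimal sketch of that flow, condensed from the RPC calls visible in this log -- the rpc.py path is abbreviated, and the socket path, NQNs, and key names key0..key3 are this run's fixtures (keys registered earlier in the test), not general defaults:

# Target side: switch the subsystem/host pairing to a new key pair.
scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: rotate the keys on the attached controller to match.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Negative check from the log: a pairing the target will not accept fails closed.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key key3    # returns -13 "Permission denied"

The log resumes below with the same negative check applied to a key2/key0 pairing, after which the controller count is polled back down to zero and the test wraps up.
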
00:18:28.622 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.622 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:28.622 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.622 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:28.622 14:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:28.882 request: 00:18:28.882 { 00:18:28.882 "name": "nvme0", 00:18:28.882 "dhchap_key": "key2", 00:18:28.882 "dhchap_ctrlr_key": "key0", 00:18:28.882 "method": "bdev_nvme_set_keys", 00:18:28.882 "req_id": 1 00:18:28.882 } 00:18:28.882 Got JSON-RPC error response 00:18:28.882 response: 00:18:28.882 { 00:18:28.882 "code": -13, 00:18:28.882 "message": "Permission denied" 00:18:28.882 } 00:18:28.882 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.882 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.882 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.882 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.882 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:28.882 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:28.882 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.143 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:29.143 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:30.084 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:30.084 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:30.084 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 997000 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 997000 ']' 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 997000 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:30.344 14:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 997000 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 997000' 00:18:30.344 killing process with pid 997000 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 997000 00:18:30.344 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 997000 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:30.604 rmmod nvme_tcp 00:18:30.604 rmmod nvme_fabrics 00:18:30.604 rmmod nvme_keyring 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1022983 ']' 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1022983 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1022983 ']' 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1022983 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022983 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022983' 00:18:30.604 killing process with pid 1022983 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1022983 00:18:30.604 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 1022983 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.864 14:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.773 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:32.773 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DbU /tmp/spdk.key-sha256.qye /tmp/spdk.key-sha384.fxf /tmp/spdk.key-sha512.edW /tmp/spdk.key-sha512.oOe /tmp/spdk.key-sha384.b15 /tmp/spdk.key-sha256.ak4 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:32.773 00:18:32.773 real 2m36.551s 00:18:32.773 user 5m52.406s 00:18:32.773 sys 0m24.601s 00:18:32.773 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.773 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.773 ************************************ 00:18:32.773 END TEST nvmf_auth_target 00:18:32.773 ************************************ 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:33.033 ************************************ 00:18:33.033 START TEST nvmf_bdevio_no_huge 00:18:33.033 ************************************ 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:33.033 * Looking for test storage... 
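
That closes nvmf_auth_target: the host and target SPDK processes are killed, the kernel nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the SPDK_NVMF iptables rules and test namespace are removed, and the generated DHHC-1 key files are deleted before the harness moves on to nvmf_bdevio_no_huge. For reference, a condensed sketch of the moving parts of the DH-HMAC-CHAP setup the finished test exercised, taken from the RPC and nvme-cli invocations recorded above -- the rpc.py path is abbreviated, the addresses, NQNs, and key names are this run's fixtures, and the DHHC-1 values below are stand-ins for the generated secrets shown in the log:

# Target side: allow the host on the subsystem and bind a DH-CHAP key to it.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key3

# SPDK host side: set the digests/DH groups to negotiate, then attach with keys.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1

# Kernel initiator path: authenticate at connect time with the raw DHHC-1 secrets.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret '<host DHHC-1 secret>' --dhchap-ctrl-secret '<controller DHHC-1 secret>'

Attach attempts with a key the target has not bound to the host, or with digest/DH-group settings it will not accept, are what produce the -5 "Input/output error" responses seen throughout this section. The log continues below with the nvmf_bdevio_no_huge setup.
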
00:18:33.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.033 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:33.294 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:33.294 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:33.294 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:33.294 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:33.294 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.294 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:33.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.294 --rc genhtml_branch_coverage=1 00:18:33.294 --rc genhtml_function_coverage=1 00:18:33.294 --rc genhtml_legend=1 00:18:33.294 --rc geninfo_all_blocks=1 00:18:33.294 --rc geninfo_unexecuted_blocks=1 00:18:33.294 00:18:33.294 ' 00:18:33.294 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:33.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.294 --rc genhtml_branch_coverage=1 00:18:33.295 --rc genhtml_function_coverage=1 00:18:33.295 --rc genhtml_legend=1 00:18:33.295 --rc geninfo_all_blocks=1 00:18:33.295 --rc geninfo_unexecuted_blocks=1 00:18:33.295 00:18:33.295 ' 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:33.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.295 --rc genhtml_branch_coverage=1 00:18:33.295 --rc genhtml_function_coverage=1 00:18:33.295 --rc genhtml_legend=1 00:18:33.295 --rc geninfo_all_blocks=1 00:18:33.295 --rc geninfo_unexecuted_blocks=1 00:18:33.295 00:18:33.295 ' 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:33.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.295 --rc genhtml_branch_coverage=1 00:18:33.295 --rc genhtml_function_coverage=1 00:18:33.295 --rc genhtml_legend=1 00:18:33.295 --rc geninfo_all_blocks=1 00:18:33.295 --rc geninfo_unexecuted_blocks=1 00:18:33.295 00:18:33.295 ' 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:33.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:33.295 14:05:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:41.694 
14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:41.694 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:41.694 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:41.694 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:41.694 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:41.694 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:41.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:18:41.695 00:18:41.695 --- 10.0.0.2 ping statistics --- 00:18:41.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.695 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:41.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:41.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:18:41.695 00:18:41.695 --- 10.0.0.1 ping statistics --- 00:18:41.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.695 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1031186 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1031186 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1031186 ']' 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.695 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.695 [2024-10-30 14:05:38.950415] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:18:41.695 [2024-10-30 14:05:38.950485] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:41.695 [2024-10-30 14:05:39.059025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.695 [2024-10-30 14:05:39.119953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.695 [2024-10-30 14:05:39.120002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.695 [2024-10-30 14:05:39.120010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.695 [2024-10-30 14:05:39.120018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.695 [2024-10-30 14:05:39.120025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
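For reference, the nvmf_tcp_init / nvmfappstart steps traced above condense to roughly the shell sketch below. The interface names (cvl_0_0, cvl_0_1), the 10.0.0.x addresses, port 4420 and the nvmf_tgt flags are taken from this run and will differ on other hosts; $SPDK_ROOT is a stand-in for the workspace checkout, the socket wait is a simplified stand-in for waitforlisten, and error handling/cleanup traps are omitted.

# move the target-side port into its own network namespace and wire up a /24 between the two ports
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port and sanity-check reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# launch the target without hugepages inside the namespace ($SPDK_ROOT: stand-in for the SPDK checkout path)
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
# simplified stand-in for waitforlisten: block until the RPC socket appears
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done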
00:18:41.695 [2024-10-30 14:05:39.121531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:41.695 [2024-10-30 14:05:39.121688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:41.695 [2024-10-30 14:05:39.121816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.695 [2024-10-30 14:05:39.121816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.695 [2024-10-30 14:05:39.831718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.695 Malloc0 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.695 [2024-10-30 14:05:39.885633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:41.695 { 00:18:41.695 "params": { 00:18:41.695 "name": "Nvme$subsystem", 00:18:41.695 "trtype": "$TEST_TRANSPORT", 00:18:41.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.695 "adrfam": "ipv4", 00:18:41.695 "trsvcid": "$NVMF_PORT", 00:18:41.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.695 "hdgst": ${hdgst:-false}, 00:18:41.695 "ddgst": ${ddgst:-false} 00:18:41.695 }, 00:18:41.695 "method": "bdev_nvme_attach_controller" 00:18:41.695 } 00:18:41.695 EOF 00:18:41.695 )") 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:41.695 14:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:41.695 "params": { 00:18:41.695 "name": "Nvme1", 00:18:41.695 "trtype": "tcp", 00:18:41.695 "traddr": "10.0.0.2", 00:18:41.695 "adrfam": "ipv4", 00:18:41.695 "trsvcid": "4420", 00:18:41.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.695 "hdgst": false, 00:18:41.695 "ddgst": false 00:18:41.695 }, 00:18:41.695 "method": "bdev_nvme_attach_controller" 00:18:41.695 }' 00:18:41.695 [2024-10-30 14:05:39.943930] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:18:41.696 [2024-10-30 14:05:39.944000] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1031373 ] 00:18:41.956 [2024-10-30 14:05:40.043950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:41.956 [2024-10-30 14:05:40.106541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.956 [2024-10-30 14:05:40.106710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.956 [2024-10-30 14:05:40.106710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.217 I/O targets: 00:18:42.217 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:42.217 00:18:42.217 00:18:42.217 CUnit - A unit testing framework for C - Version 2.1-3 00:18:42.217 http://cunit.sourceforge.net/ 00:18:42.217 00:18:42.217 00:18:42.217 Suite: bdevio tests on: Nvme1n1 00:18:42.217 Test: blockdev write read block ...passed 00:18:42.478 Test: blockdev write zeroes read block ...passed 00:18:42.478 Test: blockdev write zeroes read no split ...passed 00:18:42.478 Test: blockdev write zeroes read split ...passed 00:18:42.478 Test: blockdev write zeroes read split partial ...passed 00:18:42.478 Test: blockdev reset ...[2024-10-30 14:05:40.600344] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:42.478 [2024-10-30 14:05:40.600455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905a00 (9): Bad file descriptor 00:18:42.478 [2024-10-30 14:05:40.655861] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:42.478 passed 00:18:42.478 Test: blockdev write read 8 blocks ...passed 00:18:42.478 Test: blockdev write read size > 128k ...passed 00:18:42.478 Test: blockdev write read invalid size ...passed 00:18:42.478 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:42.478 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:42.478 Test: blockdev write read max offset ...passed 00:18:42.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:42.740 Test: blockdev writev readv 8 blocks ...passed 00:18:42.740 Test: blockdev writev readv 30 x 1block ...passed 00:18:42.740 Test: blockdev writev readv block ...passed 00:18:42.740 Test: blockdev writev readv size > 128k ...passed 00:18:42.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:42.740 Test: blockdev comparev and writev ...[2024-10-30 14:05:40.835293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.740 [2024-10-30 14:05:40.835342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.835359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.740 [2024-10-30 14:05:40.835369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.835857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.740 [2024-10-30 14:05:40.835872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.835886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.740 [2024-10-30 14:05:40.835895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.836333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.740 [2024-10-30 14:05:40.836347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.836361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.740 [2024-10-30 14:05:40.836369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.836821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.740 [2024-10-30 14:05:40.836834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.836847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.740 [2024-10-30 14:05:40.836856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.740 passed 00:18:42.740 Test: blockdev nvme passthru rw ...passed 00:18:42.740 Test: blockdev nvme passthru vendor specific ...[2024-10-30 14:05:40.921401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.740 [2024-10-30 14:05:40.921418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.921695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.740 [2024-10-30 14:05:40.921717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.921988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.740 [2024-10-30 14:05:40.922000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.740 [2024-10-30 14:05:40.922277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.740 [2024-10-30 14:05:40.922289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.740 passed 00:18:42.740 Test: blockdev nvme admin passthru ...passed 00:18:42.740 Test: blockdev copy ...passed 00:18:42.740 00:18:42.740 Run Summary: Type Total Ran Passed Failed Inactive 00:18:42.740 suites 1 1 n/a 0 0 00:18:42.740 tests 23 23 23 0 0 00:18:42.740 asserts 152 152 152 0 n/a 00:18:42.740 00:18:42.740 Elapsed time = 1.058 seconds 00:18:43.001 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.263 rmmod nvme_tcp 00:18:43.263 rmmod nvme_fabrics 00:18:43.263 rmmod nvme_keyring 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1031186 ']' 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1031186 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1031186 ']' 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1031186 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1031186 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1031186' 00:18:43.263 killing process with pid 1031186 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1031186 00:18:43.263 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1031186 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.524 14:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.072 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:46.072 00:18:46.072 real 0m12.764s 00:18:46.072 user 0m15.090s 00:18:46.072 sys 0m6.809s 00:18:46.072 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.072 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.072 ************************************ 00:18:46.072 END TEST nvmf_bdevio_no_huge 00:18:46.072 ************************************ 00:18:46.072 14:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:46.072 14:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:46.072 14:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.072 14:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.072 ************************************ 00:18:46.072 START TEST nvmf_tls 00:18:46.072 ************************************ 00:18:46.072 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:46.072 * Looking for test storage... 00:18:46.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.072 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:46.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.073 --rc genhtml_branch_coverage=1 00:18:46.073 --rc genhtml_function_coverage=1 00:18:46.073 --rc genhtml_legend=1 00:18:46.073 --rc geninfo_all_blocks=1 00:18:46.073 --rc geninfo_unexecuted_blocks=1 00:18:46.073 00:18:46.073 ' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:46.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.073 --rc genhtml_branch_coverage=1 00:18:46.073 --rc genhtml_function_coverage=1 00:18:46.073 --rc genhtml_legend=1 00:18:46.073 --rc geninfo_all_blocks=1 00:18:46.073 --rc geninfo_unexecuted_blocks=1 00:18:46.073 00:18:46.073 ' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:46.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.073 --rc genhtml_branch_coverage=1 00:18:46.073 --rc genhtml_function_coverage=1 00:18:46.073 --rc genhtml_legend=1 00:18:46.073 --rc geninfo_all_blocks=1 00:18:46.073 --rc geninfo_unexecuted_blocks=1 00:18:46.073 00:18:46.073 ' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:46.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.073 --rc genhtml_branch_coverage=1 00:18:46.073 --rc genhtml_function_coverage=1 00:18:46.073 --rc genhtml_legend=1 00:18:46.073 --rc geninfo_all_blocks=1 00:18:46.073 --rc geninfo_unexecuted_blocks=1 00:18:46.073 00:18:46.073 ' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
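The "lt 1.15 2" exchange traced above is scripts/common.sh comparing the installed lcov version (here 1.15, taken from "lcov --version | awk '{print $NF}'") against 2, field by field, to decide which coverage flags to export; 1.15 sorts below 2, so the pre-2.x "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options are chosen, presumably because lcov 2.x renamed these --rc switches. A simplified stand-in for that comparison (not the actual cmp_versions helper) could look like:

# return 0 (true) when dotted version $1 sorts strictly below $2
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field already decides it
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # versions are equal
}

if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi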
00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.073 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.221 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.221 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:54.221 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:54.221 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:54.221 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
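The gather_supported_nvmf_pci_devs trace above (and continuing below) builds per-family lists of supported vendor:device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, assorted Mellanox parts) from a pci_bus_cache map populated elsewhere in scripts/common.sh, then maps each matching PCI function to its kernel net device through sysfs. A stripped-down illustration of that sysfs mapping (not the actual helper) might be:

# print the net interface(s) backed by a given PCI vendor:device pair,
# e.g. find_nics 0x8086 0x159b reports cvl_0_0 and cvl_0_1 on this host
find_nics() {
    local vendor=$1 device=$2 pci
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
        [[ -d $pci/net ]] || continue   # skip functions with no network driver bound
        ls "$pci/net"
    done
}

find_nics 0x8086 0x159b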
00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:54.222 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:54.222 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:54.222 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:54.222 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:54.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:18:54.222 00:18:54.222 --- 10.0.0.2 ping statistics --- 00:18:54.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.222 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:54.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:18:54.222 00:18:54.222 --- 10.0.0.1 ping statistics --- 00:18:54.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.222 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1036018 00:18:54.222 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1036018 00:18:54.223 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:54.223 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1036018 ']' 00:18:54.223 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.223 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.223 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.223 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.223 14:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.223 [2024-10-30 14:05:51.817079] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
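nvmf_tcp_init above splits the two E810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator side at 10.0.0.1; an iptables rule opens TCP/4420 on the initiator interface and a ping in each direction confirms the path before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace (interface names are the ones reported under the PCI devices above):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator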
00:18:54.223 [2024-10-30 14:05:51.817146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.223 [2024-10-30 14:05:51.917064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.223 [2024-10-30 14:05:51.967564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.223 [2024-10-30 14:05:51.967613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.223 [2024-10-30 14:05:51.967623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.223 [2024-10-30 14:05:51.967635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.223 [2024-10-30 14:05:51.967641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.223 [2024-10-30 14:05:51.968425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.483 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.483 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.483 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.483 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.483 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.483 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.483 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:54.483 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:54.744 true 00:18:54.744 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:54.744 14:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:54.744 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:54.744 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:54.744 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:55.005 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:55.005 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:55.266 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:55.266 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:55.266 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:55.527 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:55.527 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:55.527 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:55.527 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:55.527 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:55.527 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:55.787 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:55.787 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:55.787 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:56.048 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.048 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:56.048 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:56.048 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:56.048 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:56.308 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.308 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.0lxtGtfq34 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Ha9A2hzU38 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0lxtGtfq34 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Ha9A2hzU38 00:18:56.569 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:56.830 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:56.830 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.0lxtGtfq34 00:18:56.830 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0lxtGtfq34 00:18:56.830 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.090 [2024-10-30 14:05:55.277911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.090 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.351 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.351 [2024-10-30 14:05:55.598703] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.351 [2024-10-30 14:05:55.598886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.351 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:57.611 malloc0 00:18:57.611 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:57.871 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0lxtGtfq34 00:18:57.871 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:58.132 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.0lxtGtfq34 00:19:08.125 Initializing NVMe Controllers 00:19:08.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:08.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:08.125 Initialization complete. Launching workers. 00:19:08.125 ======================================================== 00:19:08.125 Latency(us) 00:19:08.125 Device Information : IOPS MiB/s Average min max 00:19:08.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18749.17 73.24 3413.68 1164.82 4133.80 00:19:08.125 ======================================================== 00:19:08.125 Total : 18749.17 73.24 3413.68 1164.82 4133.80 00:19:08.125 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0lxtGtfq34 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0lxtGtfq34 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1038882 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1038882 /var/tmp/bdevperf.sock 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1038882 ']' 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
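The target that the spdk_nvme_perf run above and the bdevperf cases that follow connect to was assembled from the RPCs traced over the last stretch of log: the ssl sock implementation ends up pinned to TLS 1.3 (the earlier --tls-version 7 and ktls toggles read back in the trace look like set/get round-trip checks rather than the final configuration), the two interchange keys printed by format_interchange_psk are written to /tmp files with mode 0600, and a subsystem with a malloc namespace is exposed on a TLS-enabled listener with host1 admitted via the first key. Stripped of xtrace noise, with workspace paths shortened and rpc/key_path used only as shorthand variables for this sketch, the sequence is roughly:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    rpc=./scripts/rpc.py                 # the trace uses the full workspace path
    key_path=/tmp/tmp.0lxtGtfq34         # chmod 0600; holds the first NVMeTLSkey-1:01:... string

    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The NVMeTLSkey-1:01:<base64>: strings follow the NVMe/TCP PSK interchange format (a prefix, a hash indicator, then the configured PSK with a CRC-32 appended, base64-encoded); that description comes from the spec rather than from anything visible in this log.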
00:19:08.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.125 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.386 [2024-10-30 14:06:06.428692] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:19:08.386 [2024-10-30 14:06:06.428756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1038882 ] 00:19:08.386 [2024-10-30 14:06:06.515508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.386 [2024-10-30 14:06:06.550711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.957 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.957 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.957 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0lxtGtfq34 00:19:09.218 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:09.478 [2024-10-30 14:06:07.530119] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.478 TLSTESTn1 00:19:09.478 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:09.478 Running I/O for 10 seconds... 
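On the initiator side the pattern is the same for every case in this block: start bdevperf with -z and a private RPC socket, register the key file as a keyring entry over that socket, attach an NVMe-oF TCP controller with --psk, then drive I/O (the spdk_nvme_perf run above took the shortcut of passing the file directly via --psk-path). A condensed version of this successful case, with workspace paths shortened, rpc/sock as shorthand variables, and the NQNs exactly as used by the test:

    rpc=./scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    ./build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    # (the test waits for the RPC socket to appear before issuing the calls below)
    $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.0lxtGtfq34
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests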
00:19:11.800 4300.00 IOPS, 16.80 MiB/s [2024-10-30T13:06:11.037Z] 4515.00 IOPS, 17.64 MiB/s [2024-10-30T13:06:11.976Z] 4746.33 IOPS, 18.54 MiB/s [2024-10-30T13:06:12.920Z] 5065.50 IOPS, 19.79 MiB/s [2024-10-30T13:06:13.860Z] 5362.80 IOPS, 20.95 MiB/s [2024-10-30T13:06:14.802Z] 5264.83 IOPS, 20.57 MiB/s [2024-10-30T13:06:15.746Z] 5267.43 IOPS, 20.58 MiB/s [2024-10-30T13:06:17.129Z] 5386.12 IOPS, 21.04 MiB/s [2024-10-30T13:06:18.072Z] 5454.00 IOPS, 21.30 MiB/s [2024-10-30T13:06:18.072Z] 5369.60 IOPS, 20.98 MiB/s 00:19:19.773 Latency(us) 00:19:19.773 [2024-10-30T13:06:18.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.773 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:19.773 Verification LBA range: start 0x0 length 0x2000 00:19:19.773 TLSTESTn1 : 10.02 5371.00 20.98 0.00 0.00 23790.53 5870.93 83886.08 00:19:19.773 [2024-10-30T13:06:18.072Z] =================================================================================================================== 00:19:19.773 [2024-10-30T13:06:18.072Z] Total : 5371.00 20.98 0.00 0.00 23790.53 5870.93 83886.08 00:19:19.773 { 00:19:19.773 "results": [ 00:19:19.773 { 00:19:19.773 "job": "TLSTESTn1", 00:19:19.773 "core_mask": "0x4", 00:19:19.773 "workload": "verify", 00:19:19.773 "status": "finished", 00:19:19.773 "verify_range": { 00:19:19.773 "start": 0, 00:19:19.773 "length": 8192 00:19:19.773 }, 00:19:19.773 "queue_depth": 128, 00:19:19.773 "io_size": 4096, 00:19:19.773 "runtime": 10.021031, 00:19:19.773 "iops": 5371.004240980793, 00:19:19.773 "mibps": 20.980485316331222, 00:19:19.773 "io_failed": 0, 00:19:19.773 "io_timeout": 0, 00:19:19.773 "avg_latency_us": 23790.52758659557, 00:19:19.773 "min_latency_us": 5870.933333333333, 00:19:19.773 "max_latency_us": 83886.08 00:19:19.773 } 00:19:19.773 ], 00:19:19.773 "core_count": 1 00:19:19.773 } 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1038882 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1038882 ']' 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1038882 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1038882 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1038882' 00:19:19.773 killing process with pid 1038882 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1038882 00:19:19.773 Received shutdown signal, test time was about 10.000000 seconds 00:19:19.773 00:19:19.773 Latency(us) 00:19:19.773 [2024-10-30T13:06:18.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.773 [2024-10-30T13:06:18.072Z] 
=================================================================================================================== 00:19:19.773 [2024-10-30T13:06:18.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1038882 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ha9A2hzU38 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ha9A2hzU38 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ha9A2hzU38 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ha9A2hzU38 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1041684 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1041684 /var/tmp/bdevperf.sock 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1041684 ']' 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
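Besides the human-readable table, bdevperf also reports the run as the JSON blob a few lines up, which is the easier form to post-process; for instance, with that blob saved to a file (the file name here is purely for illustration):

    # results.json is assumed to hold the JSON block printed after the 10 s run above
    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json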
00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.773 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.773 [2024-10-30 14:06:17.994795] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:19:19.773 [2024-10-30 14:06:17.994853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041684 ] 00:19:20.034 [2024-10-30 14:06:18.078273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.034 [2024-10-30 14:06:18.106240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.605 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.605 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:20.605 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ha9A2hzU38 00:19:20.866 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.867 [2024-10-30 14:06:19.080489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.867 [2024-10-30 14:06:19.090350] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:20.867 [2024-10-30 14:06:19.090532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dddb0 (107): Transport endpoint is not connected 00:19:20.867 [2024-10-30 14:06:19.091525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dddb0 (9): Bad file descriptor 00:19:20.867 [2024-10-30 14:06:19.092527] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:20.867 [2024-10-30 14:06:19.092535] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:20.867 [2024-10-30 14:06:19.092541] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:20.867 [2024-10-30 14:06:19.092549] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:20.867 request: 00:19:20.867 { 00:19:20.867 "name": "TLSTEST", 00:19:20.867 "trtype": "tcp", 00:19:20.867 "traddr": "10.0.0.2", 00:19:20.867 "adrfam": "ipv4", 00:19:20.867 "trsvcid": "4420", 00:19:20.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.867 "prchk_reftag": false, 00:19:20.867 "prchk_guard": false, 00:19:20.867 "hdgst": false, 00:19:20.867 "ddgst": false, 00:19:20.867 "psk": "key0", 00:19:20.867 "allow_unrecognized_csi": false, 00:19:20.867 "method": "bdev_nvme_attach_controller", 00:19:20.867 "req_id": 1 00:19:20.867 } 00:19:20.867 Got JSON-RPC error response 00:19:20.867 response: 00:19:20.867 { 00:19:20.867 "code": -5, 00:19:20.867 "message": "Input/output error" 00:19:20.867 } 00:19:20.867 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1041684 00:19:20.867 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1041684 ']' 00:19:20.867 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1041684 00:19:20.867 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:20.867 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.867 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1041684 00:19:21.127 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:21.127 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:21.127 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1041684' 00:19:21.128 killing process with pid 1041684 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1041684 00:19:21.128 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.128 00:19:21.128 Latency(us) 00:19:21.128 [2024-10-30T13:06:19.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.128 [2024-10-30T13:06:19.427Z] =================================================================================================================== 00:19:21.128 [2024-10-30T13:06:19.427Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1041684 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0lxtGtfq34 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.0lxtGtfq34 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0lxtGtfq34 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0lxtGtfq34 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1041879 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1041879 /var/tmp/bdevperf.sock 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1041879 ']' 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.128 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.128 [2024-10-30 14:06:19.334768] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:19:21.128 [2024-10-30 14:06:19.334839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041879 ] 00:19:21.128 [2024-10-30 14:06:19.419093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.388 [2024-10-30 14:06:19.447773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.957 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.957 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:21.957 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0lxtGtfq34 00:19:22.217 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:22.217 [2024-10-30 14:06:20.446114] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.217 [2024-10-30 14:06:20.453576] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:22.217 [2024-10-30 14:06:20.453595] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:22.217 [2024-10-30 14:06:20.453617] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:22.217 [2024-10-30 14:06:20.454263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4bdb0 (107): Transport endpoint is not connected 00:19:22.217 [2024-10-30 14:06:20.455258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4bdb0 (9): Bad file descriptor 00:19:22.217 [2024-10-30 14:06:20.456260] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:22.217 [2024-10-30 14:06:20.456267] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:22.217 [2024-10-30 14:06:20.456273] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:22.217 [2024-10-30 14:06:20.456281] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:22.217 request: 00:19:22.217 { 00:19:22.217 "name": "TLSTEST", 00:19:22.217 "trtype": "tcp", 00:19:22.217 "traddr": "10.0.0.2", 00:19:22.217 "adrfam": "ipv4", 00:19:22.217 "trsvcid": "4420", 00:19:22.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.217 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:22.217 "prchk_reftag": false, 00:19:22.217 "prchk_guard": false, 00:19:22.217 "hdgst": false, 00:19:22.217 "ddgst": false, 00:19:22.217 "psk": "key0", 00:19:22.217 "allow_unrecognized_csi": false, 00:19:22.217 "method": "bdev_nvme_attach_controller", 00:19:22.217 "req_id": 1 00:19:22.217 } 00:19:22.217 Got JSON-RPC error response 00:19:22.217 response: 00:19:22.217 { 00:19:22.217 "code": -5, 00:19:22.217 "message": "Input/output error" 00:19:22.217 } 00:19:22.217 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1041879 00:19:22.217 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1041879 ']' 00:19:22.217 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1041879 00:19:22.217 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:22.217 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.217 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1041879 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1041879' 00:19:22.477 killing process with pid 1041879 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1041879 00:19:22.477 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.477 00:19:22.477 Latency(us) 00:19:22.477 [2024-10-30T13:06:20.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.477 [2024-10-30T13:06:20.776Z] =================================================================================================================== 00:19:22.477 [2024-10-30T13:06:20.776Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1041879 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0lxtGtfq34 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.0lxtGtfq34 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0lxtGtfq34 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0lxtGtfq34 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1042067 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1042067 /var/tmp/bdevperf.sock 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1042067 ']' 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.477 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.477 [2024-10-30 14:06:20.698457] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
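The "Could not find PSK for identity" errors in the host2 case above, and again just below in the cnode2 case being brought up here, come from the same lookup: the target resolves the TLS PSK by an identity string built from both NQNs (the format is visible verbatim in those error lines), so a key registered for host1 on cnode1 is simply not found once either NQN changes, even though the key file itself is the valid one. Purely as an illustration of how that identity is composed:

    # Identity string as it appears in the tcp.c/posix.c errors in this log:
    #   NVMe0R01 <hostnqn> <subnqn>
    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1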
00:19:22.477 [2024-10-30 14:06:20.698517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042067 ] 00:19:22.737 [2024-10-30 14:06:20.784838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.737 [2024-10-30 14:06:20.813101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.308 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.308 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.308 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0lxtGtfq34 00:19:23.568 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:23.568 [2024-10-30 14:06:21.823541] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.568 [2024-10-30 14:06:21.827907] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:23.568 [2024-10-30 14:06:21.827925] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:23.568 [2024-10-30 14:06:21.827944] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:23.568 [2024-10-30 14:06:21.828591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61fdb0 (107): Transport endpoint is not connected 00:19:23.568 [2024-10-30 14:06:21.829586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61fdb0 (9): Bad file descriptor 00:19:23.568 [2024-10-30 14:06:21.830589] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:23.568 [2024-10-30 14:06:21.830596] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:23.568 [2024-10-30 14:06:21.830602] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:23.568 [2024-10-30 14:06:21.830609] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:23.568 request: 00:19:23.568 { 00:19:23.568 "name": "TLSTEST", 00:19:23.568 "trtype": "tcp", 00:19:23.568 "traddr": "10.0.0.2", 00:19:23.568 "adrfam": "ipv4", 00:19:23.568 "trsvcid": "4420", 00:19:23.568 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:23.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.568 "prchk_reftag": false, 00:19:23.568 "prchk_guard": false, 00:19:23.568 "hdgst": false, 00:19:23.568 "ddgst": false, 00:19:23.568 "psk": "key0", 00:19:23.568 "allow_unrecognized_csi": false, 00:19:23.568 "method": "bdev_nvme_attach_controller", 00:19:23.568 "req_id": 1 00:19:23.568 } 00:19:23.568 Got JSON-RPC error response 00:19:23.568 response: 00:19:23.568 { 00:19:23.568 "code": -5, 00:19:23.568 "message": "Input/output error" 00:19:23.568 } 00:19:23.568 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1042067 00:19:23.568 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1042067 ']' 00:19:23.568 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1042067 00:19:23.568 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.568 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.568 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042067 00:19:23.829 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:23.829 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:23.829 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042067' 00:19:23.829 killing process with pid 1042067 00:19:23.829 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1042067 00:19:23.829 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.829 00:19:23.829 Latency(us) 00:19:23.829 [2024-10-30T13:06:22.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.829 [2024-10-30T13:06:22.128Z] =================================================================================================================== 00:19:23.829 [2024-10-30T13:06:22.128Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.829 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1042067 00:19:23.829 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:23.829 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:23.829 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.829 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.829 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.829 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:23.830 
14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1042397 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1042397 /var/tmp/bdevperf.sock 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1042397 ']' 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.830 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.830 [2024-10-30 14:06:22.071858] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:19:23.830 [2024-10-30 14:06:22.071913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042397 ] 00:19:24.090 [2024-10-30 14:06:22.155439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.090 [2024-10-30 14:06:22.184046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.661 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.661 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.661 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:24.921 [2024-10-30 14:06:23.013824] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:24.921 [2024-10-30 14:06:23.013850] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:24.921 request: 00:19:24.921 { 00:19:24.921 "name": "key0", 00:19:24.921 "path": "", 00:19:24.921 "method": "keyring_file_add_key", 00:19:24.921 "req_id": 1 00:19:24.921 } 00:19:24.921 Got JSON-RPC error response 00:19:24.921 response: 00:19:24.921 { 00:19:24.921 "code": -1, 00:19:24.921 "message": "Operation not permitted" 00:19:24.921 } 00:19:24.921 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:24.921 [2024-10-30 14:06:23.190359] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.921 [2024-10-30 14:06:23.190382] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:24.921 request: 00:19:24.921 { 00:19:24.921 "name": "TLSTEST", 00:19:24.921 "trtype": "tcp", 00:19:24.921 "traddr": "10.0.0.2", 00:19:24.921 "adrfam": "ipv4", 00:19:24.921 "trsvcid": "4420", 00:19:24.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.921 "prchk_reftag": false, 00:19:24.921 "prchk_guard": false, 00:19:24.921 "hdgst": false, 00:19:24.921 "ddgst": false, 00:19:24.921 "psk": "key0", 00:19:24.921 "allow_unrecognized_csi": false, 00:19:24.921 "method": "bdev_nvme_attach_controller", 00:19:24.921 "req_id": 1 00:19:24.921 } 00:19:24.921 Got JSON-RPC error response 00:19:24.921 response: 00:19:24.921 { 00:19:24.921 "code": -126, 00:19:24.921 "message": "Required key not available" 00:19:24.921 } 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1042397 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1042397 ']' 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1042397 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1042397 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042397' 00:19:25.182 killing process with pid 1042397 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1042397 00:19:25.182 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.182 00:19:25.182 Latency(us) 00:19:25.182 [2024-10-30T13:06:23.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.182 [2024-10-30T13:06:23.481Z] =================================================================================================================== 00:19:25.182 [2024-10-30T13:06:23.481Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1042397 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1036018 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1036018 ']' 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1036018 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1036018 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1036018' 00:19:25.182 killing process with pid 1036018 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1036018 00:19:25.182 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1036018 00:19:25.443 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:25.443 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:25.443 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:25.443 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:25.443 14:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:25.443 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:25.443 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.EVxctdrZzx 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.EVxctdrZzx 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1042745 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1042745 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1042745 ']' 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.444 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.444 [2024-10-30 14:06:23.675315] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:19:25.444 [2024-10-30 14:06:23.675373] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.704 [2024-10-30 14:06:23.768874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.704 [2024-10-30 14:06:23.798387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.704 [2024-10-30 14:06:23.798419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
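For orientation, the negative case traced above drives only the host side: bdevperf is started on its own RPC socket, the PSK file is loaded into its keyring, and the TLS attach to cnode2 is expected to fail (the call is wrapped in NOT, and the attach comes back with code -5 after the PSK identity lookup errors out). Condensed, the two host-side RPCs as the log issues them ($rootdir is shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout; key path and NQNs are the ones shown above):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # load the PSK file into bdevperf's keyring under the name "key0"
  $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0lxtGtfq34
  # attach over TCP with TLS, referencing the keyring entry by name
  $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0
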
00:19:25.704 [2024-10-30 14:06:23.798424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.704 [2024-10-30 14:06:23.798429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.704 [2024-10-30 14:06:23.798434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.704 [2024-10-30 14:06:23.798912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.274 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.274 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.274 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.274 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.274 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.275 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.275 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.EVxctdrZzx 00:19:26.275 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EVxctdrZzx 00:19:26.275 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:26.536 [2024-10-30 14:06:24.663335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.536 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:26.796 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:26.796 [2024-10-30 14:06:24.988140] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:26.796 [2024-10-30 14:06:24.988321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.796 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.056 malloc0 00:19:27.056 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:27.056 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EVxctdrZzx 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EVxctdrZzx 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1043111 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1043111 /var/tmp/bdevperf.sock 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1043111 ']' 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.316 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.317 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.317 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.577 [2024-10-30 14:06:25.664013] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:19:27.577 [2024-10-30 14:06:25.664067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043111 ] 00:19:27.577 [2024-10-30 14:06:25.745389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.577 [2024-10-30 14:06:25.774339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.148 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.148 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.148 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx 00:19:28.409 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.669 [2024-10-30 14:06:26.740585] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.670 TLSTESTn1 00:19:28.670 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:28.670 Running I/O for 10 seconds... 00:19:30.990 5291.00 IOPS, 20.67 MiB/s [2024-10-30T13:06:30.228Z] 5444.50 IOPS, 21.27 MiB/s [2024-10-30T13:06:31.168Z] 5189.33 IOPS, 20.27 MiB/s [2024-10-30T13:06:32.108Z] 5298.25 IOPS, 20.70 MiB/s [2024-10-30T13:06:33.048Z] 5300.60 IOPS, 20.71 MiB/s [2024-10-30T13:06:33.989Z] 5214.83 IOPS, 20.37 MiB/s [2024-10-30T13:06:35.372Z] 5249.43 IOPS, 20.51 MiB/s [2024-10-30T13:06:36.312Z] 5355.38 IOPS, 20.92 MiB/s [2024-10-30T13:06:37.254Z] 5425.33 IOPS, 21.19 MiB/s [2024-10-30T13:06:37.254Z] 5407.20 IOPS, 21.12 MiB/s 00:19:38.955 Latency(us) 00:19:38.955 [2024-10-30T13:06:37.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.955 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:38.955 Verification LBA range: start 0x0 length 0x2000 00:19:38.955 TLSTESTn1 : 10.02 5408.01 21.13 0.00 0.00 23629.90 5106.35 31894.19 00:19:38.955 [2024-10-30T13:06:37.254Z] =================================================================================================================== 00:19:38.955 [2024-10-30T13:06:37.254Z] Total : 5408.01 21.13 0.00 0.00 23629.90 5106.35 31894.19 00:19:38.955 { 00:19:38.955 "results": [ 00:19:38.955 { 00:19:38.955 "job": "TLSTESTn1", 00:19:38.955 "core_mask": "0x4", 00:19:38.955 "workload": "verify", 00:19:38.955 "status": "finished", 00:19:38.955 "verify_range": { 00:19:38.955 "start": 0, 00:19:38.955 "length": 8192 00:19:38.955 }, 00:19:38.955 "queue_depth": 128, 00:19:38.955 "io_size": 4096, 00:19:38.955 "runtime": 10.02198, 00:19:38.955 "iops": 5408.013187014941, 00:19:38.955 "mibps": 21.125051511777112, 00:19:38.955 "io_failed": 0, 00:19:38.955 "io_timeout": 0, 00:19:38.955 "avg_latency_us": 23629.904032177714, 00:19:38.955 "min_latency_us": 5106.346666666666, 00:19:38.955 "max_latency_us": 31894.18666666667 00:19:38.955 } 00:19:38.955 ], 00:19:38.955 
"core_count": 1 00:19:38.955 } 00:19:38.955 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.955 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1043111 00:19:38.955 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1043111 ']' 00:19:38.955 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1043111 00:19:38.955 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1043111 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1043111' 00:19:38.955 killing process with pid 1043111 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1043111 00:19:38.955 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.955 00:19:38.955 Latency(us) 00:19:38.955 [2024-10-30T13:06:37.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.955 [2024-10-30T13:06:37.254Z] =================================================================================================================== 00:19:38.955 [2024-10-30T13:06:37.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1043111 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.EVxctdrZzx 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EVxctdrZzx 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EVxctdrZzx 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EVxctdrZzx 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EVxctdrZzx 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1045463 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1045463 /var/tmp/bdevperf.sock 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1045463 ']' 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.955 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.955 [2024-10-30 14:06:37.222594] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:19:38.955 [2024-10-30 14:06:37.222649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045463 ] 00:19:39.216 [2024-10-30 14:06:37.306024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.216 [2024-10-30 14:06:37.334593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.787 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.787 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.787 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx 00:19:40.048 [2024-10-30 14:06:38.176563] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EVxctdrZzx': 0100666 00:19:40.048 [2024-10-30 14:06:38.176591] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:40.048 request: 00:19:40.048 { 00:19:40.048 "name": "key0", 00:19:40.048 "path": "/tmp/tmp.EVxctdrZzx", 00:19:40.048 "method": "keyring_file_add_key", 00:19:40.048 "req_id": 1 00:19:40.048 } 00:19:40.048 Got JSON-RPC error response 00:19:40.048 response: 00:19:40.048 { 00:19:40.048 "code": -1, 00:19:40.048 "message": "Operation not permitted" 00:19:40.048 } 00:19:40.048 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.311 [2024-10-30 14:06:38.365108] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.311 [2024-10-30 14:06:38.365132] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:40.311 request: 00:19:40.311 { 00:19:40.311 "name": "TLSTEST", 00:19:40.311 "trtype": "tcp", 00:19:40.311 "traddr": "10.0.0.2", 00:19:40.311 "adrfam": "ipv4", 00:19:40.311 "trsvcid": "4420", 00:19:40.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.311 "prchk_reftag": false, 00:19:40.311 "prchk_guard": false, 00:19:40.311 "hdgst": false, 00:19:40.311 "ddgst": false, 00:19:40.311 "psk": "key0", 00:19:40.311 "allow_unrecognized_csi": false, 00:19:40.311 "method": "bdev_nvme_attach_controller", 00:19:40.311 "req_id": 1 00:19:40.311 } 00:19:40.311 Got JSON-RPC error response 00:19:40.311 response: 00:19:40.311 { 00:19:40.311 "code": -126, 00:19:40.311 "message": "Required key not available" 00:19:40.311 } 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1045463 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1045463 ']' 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1045463 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045463 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045463' 00:19:40.311 killing process with pid 1045463 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1045463 00:19:40.311 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.311 00:19:40.311 Latency(us) 00:19:40.311 [2024-10-30T13:06:38.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.311 [2024-10-30T13:06:38.610Z] =================================================================================================================== 00:19:40.311 [2024-10-30T13:06:38.610Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1045463 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1042745 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1042745 ']' 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1042745 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.311 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042745 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042745' 00:19:40.572 killing process with pid 1042745 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1042745 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1042745 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1045813 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1045813 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1045813 ']' 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.572 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.572 [2024-10-30 14:06:38.789071] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:19:40.572 [2024-10-30 14:06:38.789124] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.833 [2024-10-30 14:06:38.880946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.833 [2024-10-30 14:06:38.908974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.833 [2024-10-30 14:06:38.909001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.833 [2024-10-30 14:06:38.909007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.833 [2024-10-30 14:06:38.909012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.833 [2024-10-30 14:06:38.909016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
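Consolidated, the target-side setup_nvmf_tgt sequence exercised in these runs is the following RPC chain (arguments taken from the trace above; $rootdir and $key are shorthand for the checkout path and the interchange-format PSK file created earlier; the -k flag on the listener requests a secure (TLS) channel, which is why the target logs "TLS support is considered experimental"):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  key=/tmp/tmp.EVxctdrZzx
  $rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o
  $rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rootdir/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  $rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rootdir/scripts/rpc.py keyring_file_add_key key0 "$key"
  $rootdir/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
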
00:19:40.833 [2024-10-30 14:06:38.909467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.EVxctdrZzx 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.EVxctdrZzx 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.EVxctdrZzx 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EVxctdrZzx 00:19:41.403 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.664 [2024-10-30 14:06:39.780561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.664 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:41.924 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:41.924 [2024-10-30 14:06:40.141470] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.924 [2024-10-30 14:06:40.141660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.924 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.184 malloc0 00:19:42.184 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:42.445 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx 00:19:42.445 [2024-10-30 
14:06:40.680584] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EVxctdrZzx': 0100666 00:19:42.445 [2024-10-30 14:06:40.680608] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:42.445 request: 00:19:42.445 { 00:19:42.445 "name": "key0", 00:19:42.445 "path": "/tmp/tmp.EVxctdrZzx", 00:19:42.445 "method": "keyring_file_add_key", 00:19:42.445 "req_id": 1 00:19:42.445 } 00:19:42.445 Got JSON-RPC error response 00:19:42.445 response: 00:19:42.445 { 00:19:42.445 "code": -1, 00:19:42.445 "message": "Operation not permitted" 00:19:42.445 } 00:19:42.445 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:42.706 [2024-10-30 14:06:40.853034] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:42.706 [2024-10-30 14:06:40.853062] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:42.706 request: 00:19:42.706 { 00:19:42.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.706 "host": "nqn.2016-06.io.spdk:host1", 00:19:42.706 "psk": "key0", 00:19:42.706 "method": "nvmf_subsystem_add_host", 00:19:42.706 "req_id": 1 00:19:42.706 } 00:19:42.706 Got JSON-RPC error response 00:19:42.706 response: 00:19:42.706 { 00:19:42.706 "code": -32603, 00:19:42.706 "message": "Internal error" 00:19:42.706 } 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1045813 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1045813 ']' 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1045813 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045813 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045813' 00:19:42.706 killing process with pid 1045813 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1045813 00:19:42.706 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1045813 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.EVxctdrZzx 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:42.967 14:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1046190 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1046190 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1046190 ']' 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.967 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.968 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.968 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.968 [2024-10-30 14:06:41.127308] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:19:42.968 [2024-10-30 14:06:41.127364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.968 [2024-10-30 14:06:41.219645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.968 [2024-10-30 14:06:41.248513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.968 [2024-10-30 14:06:41.248544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.968 [2024-10-30 14:06:41.248550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.968 [2024-10-30 14:06:41.248555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.968 [2024-10-30 14:06:41.248560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
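The "Invalid permissions for key file ... 0100666" failures above, and the -1 / Operation not permitted responses that follow them, come from keyring_file_add_key's permission check on the key file: after the chmod 0666 the keyring refuses the file, and it can only be registered again once tls.sh@182 restores 0600. A minimal reproduction with the same file (the exact mode bits the keyring insists on are not spelled out in the log; 0600 is what the test uses):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  key=/tmp/tmp.EVxctdrZzx
  chmod 0666 "$key"
  $rootdir/scripts/rpc.py keyring_file_add_key key0 "$key"   # rejected: -1 / Operation not permitted
  chmod 0600 "$key"
  $rootdir/scripts/rpc.py keyring_file_add_key key0 "$key"   # accepted again with 0600, as the test restores
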
00:19:42.968 [2024-10-30 14:06:41.249033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.910 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.910 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:43.910 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:43.910 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.910 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.910 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.910 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.EVxctdrZzx 00:19:43.910 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EVxctdrZzx 00:19:43.910 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.910 [2024-10-30 14:06:42.116613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.910 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:44.173 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:44.173 [2024-10-30 14:06:42.453448] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.173 [2024-10-30 14:06:42.453632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.173 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:44.439 malloc0 00:19:44.439 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.699 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx 00:19:44.699 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1046560 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1046560 /var/tmp/bdevperf.sock 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1046560 ']' 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.960 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.960 [2024-10-30 14:06:43.156315] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:19:44.960 [2024-10-30 14:06:43.156367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046560 ] 00:19:44.960 [2024-10-30 14:06:43.237751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.220 [2024-10-30 14:06:43.266826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.220 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.220 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.220 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx 00:19:45.220 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.480 [2024-10-30 14:06:43.660135] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.480 TLSTESTn1 00:19:45.480 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:45.741 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:45.741 "subsystems": [ 00:19:45.741 { 00:19:45.741 "subsystem": "keyring", 00:19:45.741 "config": [ 00:19:45.741 { 00:19:45.741 "method": "keyring_file_add_key", 00:19:45.741 "params": { 00:19:45.741 "name": "key0", 00:19:45.741 "path": "/tmp/tmp.EVxctdrZzx" 00:19:45.741 } 00:19:45.741 } 00:19:45.741 ] 00:19:45.741 }, 00:19:45.741 { 00:19:45.741 "subsystem": "iobuf", 00:19:45.741 "config": [ 00:19:45.741 { 00:19:45.741 "method": "iobuf_set_options", 00:19:45.741 "params": { 00:19:45.741 "small_pool_count": 8192, 00:19:45.741 "large_pool_count": 1024, 00:19:45.741 "small_bufsize": 8192, 00:19:45.741 "large_bufsize": 135168, 00:19:45.741 "enable_numa": false 00:19:45.741 } 00:19:45.741 } 00:19:45.741 ] 00:19:45.741 }, 00:19:45.741 { 00:19:45.741 "subsystem": "sock", 00:19:45.741 "config": [ 00:19:45.741 { 00:19:45.741 "method": "sock_set_default_impl", 00:19:45.741 "params": { 00:19:45.741 "impl_name": "posix" 
00:19:45.741 } 00:19:45.741 }, 00:19:45.741 { 00:19:45.741 "method": "sock_impl_set_options", 00:19:45.741 "params": { 00:19:45.741 "impl_name": "ssl", 00:19:45.741 "recv_buf_size": 4096, 00:19:45.741 "send_buf_size": 4096, 00:19:45.741 "enable_recv_pipe": true, 00:19:45.741 "enable_quickack": false, 00:19:45.741 "enable_placement_id": 0, 00:19:45.741 "enable_zerocopy_send_server": true, 00:19:45.741 "enable_zerocopy_send_client": false, 00:19:45.741 "zerocopy_threshold": 0, 00:19:45.741 "tls_version": 0, 00:19:45.741 "enable_ktls": false 00:19:45.741 } 00:19:45.741 }, 00:19:45.741 { 00:19:45.741 "method": "sock_impl_set_options", 00:19:45.741 "params": { 00:19:45.741 "impl_name": "posix", 00:19:45.741 "recv_buf_size": 2097152, 00:19:45.741 "send_buf_size": 2097152, 00:19:45.741 "enable_recv_pipe": true, 00:19:45.741 "enable_quickack": false, 00:19:45.741 "enable_placement_id": 0, 00:19:45.741 "enable_zerocopy_send_server": true, 00:19:45.741 "enable_zerocopy_send_client": false, 00:19:45.741 "zerocopy_threshold": 0, 00:19:45.741 "tls_version": 0, 00:19:45.741 "enable_ktls": false 00:19:45.741 } 00:19:45.741 } 00:19:45.741 ] 00:19:45.741 }, 00:19:45.741 { 00:19:45.741 "subsystem": "vmd", 00:19:45.741 "config": [] 00:19:45.741 }, 00:19:45.741 { 00:19:45.741 "subsystem": "accel", 00:19:45.741 "config": [ 00:19:45.741 { 00:19:45.741 "method": "accel_set_options", 00:19:45.741 "params": { 00:19:45.741 "small_cache_size": 128, 00:19:45.741 "large_cache_size": 16, 00:19:45.741 "task_count": 2048, 00:19:45.741 "sequence_count": 2048, 00:19:45.741 "buf_count": 2048 00:19:45.741 } 00:19:45.741 } 00:19:45.742 ] 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "subsystem": "bdev", 00:19:45.742 "config": [ 00:19:45.742 { 00:19:45.742 "method": "bdev_set_options", 00:19:45.742 "params": { 00:19:45.742 "bdev_io_pool_size": 65535, 00:19:45.742 "bdev_io_cache_size": 256, 00:19:45.742 "bdev_auto_examine": true, 00:19:45.742 "iobuf_small_cache_size": 128, 00:19:45.742 "iobuf_large_cache_size": 16 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "bdev_raid_set_options", 00:19:45.742 "params": { 00:19:45.742 "process_window_size_kb": 1024, 00:19:45.742 "process_max_bandwidth_mb_sec": 0 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "bdev_iscsi_set_options", 00:19:45.742 "params": { 00:19:45.742 "timeout_sec": 30 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "bdev_nvme_set_options", 00:19:45.742 "params": { 00:19:45.742 "action_on_timeout": "none", 00:19:45.742 "timeout_us": 0, 00:19:45.742 "timeout_admin_us": 0, 00:19:45.742 "keep_alive_timeout_ms": 10000, 00:19:45.742 "arbitration_burst": 0, 00:19:45.742 "low_priority_weight": 0, 00:19:45.742 "medium_priority_weight": 0, 00:19:45.742 "high_priority_weight": 0, 00:19:45.742 "nvme_adminq_poll_period_us": 10000, 00:19:45.742 "nvme_ioq_poll_period_us": 0, 00:19:45.742 "io_queue_requests": 0, 00:19:45.742 "delay_cmd_submit": true, 00:19:45.742 "transport_retry_count": 4, 00:19:45.742 "bdev_retry_count": 3, 00:19:45.742 "transport_ack_timeout": 0, 00:19:45.742 "ctrlr_loss_timeout_sec": 0, 00:19:45.742 "reconnect_delay_sec": 0, 00:19:45.742 "fast_io_fail_timeout_sec": 0, 00:19:45.742 "disable_auto_failback": false, 00:19:45.742 "generate_uuids": false, 00:19:45.742 "transport_tos": 0, 00:19:45.742 "nvme_error_stat": false, 00:19:45.742 "rdma_srq_size": 0, 00:19:45.742 "io_path_stat": false, 00:19:45.742 "allow_accel_sequence": false, 00:19:45.742 "rdma_max_cq_size": 0, 00:19:45.742 
"rdma_cm_event_timeout_ms": 0, 00:19:45.742 "dhchap_digests": [ 00:19:45.742 "sha256", 00:19:45.742 "sha384", 00:19:45.742 "sha512" 00:19:45.742 ], 00:19:45.742 "dhchap_dhgroups": [ 00:19:45.742 "null", 00:19:45.742 "ffdhe2048", 00:19:45.742 "ffdhe3072", 00:19:45.742 "ffdhe4096", 00:19:45.742 "ffdhe6144", 00:19:45.742 "ffdhe8192" 00:19:45.742 ] 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "bdev_nvme_set_hotplug", 00:19:45.742 "params": { 00:19:45.742 "period_us": 100000, 00:19:45.742 "enable": false 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "bdev_malloc_create", 00:19:45.742 "params": { 00:19:45.742 "name": "malloc0", 00:19:45.742 "num_blocks": 8192, 00:19:45.742 "block_size": 4096, 00:19:45.742 "physical_block_size": 4096, 00:19:45.742 "uuid": "31b9fbd7-3685-4d3b-af37-986988152c86", 00:19:45.742 "optimal_io_boundary": 0, 00:19:45.742 "md_size": 0, 00:19:45.742 "dif_type": 0, 00:19:45.742 "dif_is_head_of_md": false, 00:19:45.742 "dif_pi_format": 0 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "bdev_wait_for_examine" 00:19:45.742 } 00:19:45.742 ] 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "subsystem": "nbd", 00:19:45.742 "config": [] 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "subsystem": "scheduler", 00:19:45.742 "config": [ 00:19:45.742 { 00:19:45.742 "method": "framework_set_scheduler", 00:19:45.742 "params": { 00:19:45.742 "name": "static" 00:19:45.742 } 00:19:45.742 } 00:19:45.742 ] 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "subsystem": "nvmf", 00:19:45.742 "config": [ 00:19:45.742 { 00:19:45.742 "method": "nvmf_set_config", 00:19:45.742 "params": { 00:19:45.742 "discovery_filter": "match_any", 00:19:45.742 "admin_cmd_passthru": { 00:19:45.742 "identify_ctrlr": false 00:19:45.742 }, 00:19:45.742 "dhchap_digests": [ 00:19:45.742 "sha256", 00:19:45.742 "sha384", 00:19:45.742 "sha512" 00:19:45.742 ], 00:19:45.742 "dhchap_dhgroups": [ 00:19:45.742 "null", 00:19:45.742 "ffdhe2048", 00:19:45.742 "ffdhe3072", 00:19:45.742 "ffdhe4096", 00:19:45.742 "ffdhe6144", 00:19:45.742 "ffdhe8192" 00:19:45.742 ] 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "nvmf_set_max_subsystems", 00:19:45.742 "params": { 00:19:45.742 "max_subsystems": 1024 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "nvmf_set_crdt", 00:19:45.742 "params": { 00:19:45.742 "crdt1": 0, 00:19:45.742 "crdt2": 0, 00:19:45.742 "crdt3": 0 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "nvmf_create_transport", 00:19:45.742 "params": { 00:19:45.742 "trtype": "TCP", 00:19:45.742 "max_queue_depth": 128, 00:19:45.742 "max_io_qpairs_per_ctrlr": 127, 00:19:45.742 "in_capsule_data_size": 4096, 00:19:45.742 "max_io_size": 131072, 00:19:45.742 "io_unit_size": 131072, 00:19:45.742 "max_aq_depth": 128, 00:19:45.742 "num_shared_buffers": 511, 00:19:45.742 "buf_cache_size": 4294967295, 00:19:45.742 "dif_insert_or_strip": false, 00:19:45.742 "zcopy": false, 00:19:45.742 "c2h_success": false, 00:19:45.742 "sock_priority": 0, 00:19:45.742 "abort_timeout_sec": 1, 00:19:45.742 "ack_timeout": 0, 00:19:45.742 "data_wr_pool_size": 0 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "nvmf_create_subsystem", 00:19:45.742 "params": { 00:19:45.742 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.742 "allow_any_host": false, 00:19:45.742 "serial_number": "SPDK00000000000001", 00:19:45.742 "model_number": "SPDK bdev Controller", 00:19:45.742 "max_namespaces": 10, 00:19:45.742 "min_cntlid": 1, 00:19:45.742 
"max_cntlid": 65519, 00:19:45.742 "ana_reporting": false 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "nvmf_subsystem_add_host", 00:19:45.742 "params": { 00:19:45.742 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.742 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.742 "psk": "key0" 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "nvmf_subsystem_add_ns", 00:19:45.742 "params": { 00:19:45.742 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.742 "namespace": { 00:19:45.742 "nsid": 1, 00:19:45.742 "bdev_name": "malloc0", 00:19:45.742 "nguid": "31B9FBD736854D3BAF37986988152C86", 00:19:45.742 "uuid": "31b9fbd7-3685-4d3b-af37-986988152c86", 00:19:45.742 "no_auto_visible": false 00:19:45.742 } 00:19:45.742 } 00:19:45.742 }, 00:19:45.742 { 00:19:45.742 "method": "nvmf_subsystem_add_listener", 00:19:45.742 "params": { 00:19:45.742 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.742 "listen_address": { 00:19:45.742 "trtype": "TCP", 00:19:45.742 "adrfam": "IPv4", 00:19:45.742 "traddr": "10.0.0.2", 00:19:45.742 "trsvcid": "4420" 00:19:45.742 }, 00:19:45.742 "secure_channel": true 00:19:45.742 } 00:19:45.742 } 00:19:45.742 ] 00:19:45.742 } 00:19:45.742 ] 00:19:45.742 }' 00:19:45.742 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:46.004 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:46.004 "subsystems": [ 00:19:46.004 { 00:19:46.004 "subsystem": "keyring", 00:19:46.004 "config": [ 00:19:46.004 { 00:19:46.004 "method": "keyring_file_add_key", 00:19:46.004 "params": { 00:19:46.004 "name": "key0", 00:19:46.004 "path": "/tmp/tmp.EVxctdrZzx" 00:19:46.004 } 00:19:46.004 } 00:19:46.004 ] 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "subsystem": "iobuf", 00:19:46.004 "config": [ 00:19:46.004 { 00:19:46.004 "method": "iobuf_set_options", 00:19:46.004 "params": { 00:19:46.004 "small_pool_count": 8192, 00:19:46.004 "large_pool_count": 1024, 00:19:46.004 "small_bufsize": 8192, 00:19:46.004 "large_bufsize": 135168, 00:19:46.004 "enable_numa": false 00:19:46.004 } 00:19:46.004 } 00:19:46.004 ] 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "subsystem": "sock", 00:19:46.004 "config": [ 00:19:46.004 { 00:19:46.004 "method": "sock_set_default_impl", 00:19:46.004 "params": { 00:19:46.004 "impl_name": "posix" 00:19:46.004 } 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "method": "sock_impl_set_options", 00:19:46.004 "params": { 00:19:46.004 "impl_name": "ssl", 00:19:46.004 "recv_buf_size": 4096, 00:19:46.004 "send_buf_size": 4096, 00:19:46.004 "enable_recv_pipe": true, 00:19:46.004 "enable_quickack": false, 00:19:46.004 "enable_placement_id": 0, 00:19:46.004 "enable_zerocopy_send_server": true, 00:19:46.004 "enable_zerocopy_send_client": false, 00:19:46.004 "zerocopy_threshold": 0, 00:19:46.004 "tls_version": 0, 00:19:46.004 "enable_ktls": false 00:19:46.004 } 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "method": "sock_impl_set_options", 00:19:46.004 "params": { 00:19:46.004 "impl_name": "posix", 00:19:46.004 "recv_buf_size": 2097152, 00:19:46.004 "send_buf_size": 2097152, 00:19:46.004 "enable_recv_pipe": true, 00:19:46.004 "enable_quickack": false, 00:19:46.004 "enable_placement_id": 0, 00:19:46.004 "enable_zerocopy_send_server": true, 00:19:46.004 "enable_zerocopy_send_client": false, 00:19:46.004 "zerocopy_threshold": 0, 00:19:46.004 "tls_version": 0, 00:19:46.004 "enable_ktls": false 00:19:46.004 } 00:19:46.004 
} 00:19:46.004 ] 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "subsystem": "vmd", 00:19:46.004 "config": [] 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "subsystem": "accel", 00:19:46.004 "config": [ 00:19:46.004 { 00:19:46.004 "method": "accel_set_options", 00:19:46.004 "params": { 00:19:46.004 "small_cache_size": 128, 00:19:46.004 "large_cache_size": 16, 00:19:46.004 "task_count": 2048, 00:19:46.004 "sequence_count": 2048, 00:19:46.004 "buf_count": 2048 00:19:46.004 } 00:19:46.004 } 00:19:46.004 ] 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "subsystem": "bdev", 00:19:46.004 "config": [ 00:19:46.004 { 00:19:46.004 "method": "bdev_set_options", 00:19:46.004 "params": { 00:19:46.004 "bdev_io_pool_size": 65535, 00:19:46.004 "bdev_io_cache_size": 256, 00:19:46.004 "bdev_auto_examine": true, 00:19:46.004 "iobuf_small_cache_size": 128, 00:19:46.004 "iobuf_large_cache_size": 16 00:19:46.004 } 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "method": "bdev_raid_set_options", 00:19:46.004 "params": { 00:19:46.004 "process_window_size_kb": 1024, 00:19:46.004 "process_max_bandwidth_mb_sec": 0 00:19:46.004 } 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "method": "bdev_iscsi_set_options", 00:19:46.004 "params": { 00:19:46.004 "timeout_sec": 30 00:19:46.004 } 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "method": "bdev_nvme_set_options", 00:19:46.004 "params": { 00:19:46.004 "action_on_timeout": "none", 00:19:46.004 "timeout_us": 0, 00:19:46.004 "timeout_admin_us": 0, 00:19:46.004 "keep_alive_timeout_ms": 10000, 00:19:46.004 "arbitration_burst": 0, 00:19:46.004 "low_priority_weight": 0, 00:19:46.004 "medium_priority_weight": 0, 00:19:46.004 "high_priority_weight": 0, 00:19:46.004 "nvme_adminq_poll_period_us": 10000, 00:19:46.004 "nvme_ioq_poll_period_us": 0, 00:19:46.004 "io_queue_requests": 512, 00:19:46.004 "delay_cmd_submit": true, 00:19:46.004 "transport_retry_count": 4, 00:19:46.004 "bdev_retry_count": 3, 00:19:46.004 "transport_ack_timeout": 0, 00:19:46.004 "ctrlr_loss_timeout_sec": 0, 00:19:46.004 "reconnect_delay_sec": 0, 00:19:46.004 "fast_io_fail_timeout_sec": 0, 00:19:46.004 "disable_auto_failback": false, 00:19:46.004 "generate_uuids": false, 00:19:46.004 "transport_tos": 0, 00:19:46.004 "nvme_error_stat": false, 00:19:46.004 "rdma_srq_size": 0, 00:19:46.004 "io_path_stat": false, 00:19:46.004 "allow_accel_sequence": false, 00:19:46.004 "rdma_max_cq_size": 0, 00:19:46.004 "rdma_cm_event_timeout_ms": 0, 00:19:46.004 "dhchap_digests": [ 00:19:46.004 "sha256", 00:19:46.004 "sha384", 00:19:46.004 "sha512" 00:19:46.004 ], 00:19:46.004 "dhchap_dhgroups": [ 00:19:46.004 "null", 00:19:46.004 "ffdhe2048", 00:19:46.004 "ffdhe3072", 00:19:46.004 "ffdhe4096", 00:19:46.004 "ffdhe6144", 00:19:46.004 "ffdhe8192" 00:19:46.004 ] 00:19:46.004 } 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "method": "bdev_nvme_attach_controller", 00:19:46.004 "params": { 00:19:46.004 "name": "TLSTEST", 00:19:46.004 "trtype": "TCP", 00:19:46.004 "adrfam": "IPv4", 00:19:46.004 "traddr": "10.0.0.2", 00:19:46.004 "trsvcid": "4420", 00:19:46.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.004 "prchk_reftag": false, 00:19:46.004 "prchk_guard": false, 00:19:46.004 "ctrlr_loss_timeout_sec": 0, 00:19:46.004 "reconnect_delay_sec": 0, 00:19:46.004 "fast_io_fail_timeout_sec": 0, 00:19:46.004 "psk": "key0", 00:19:46.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.004 "hdgst": false, 00:19:46.004 "ddgst": false, 00:19:46.004 "multipath": "multipath" 00:19:46.004 } 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "method": 
"bdev_nvme_set_hotplug", 00:19:46.004 "params": { 00:19:46.004 "period_us": 100000, 00:19:46.004 "enable": false 00:19:46.004 } 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "method": "bdev_wait_for_examine" 00:19:46.004 } 00:19:46.004 ] 00:19:46.004 }, 00:19:46.004 { 00:19:46.004 "subsystem": "nbd", 00:19:46.004 "config": [] 00:19:46.004 } 00:19:46.004 ] 00:19:46.004 }' 00:19:46.004 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1046560 00:19:46.005 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1046560 ']' 00:19:46.005 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1046560 00:19:46.005 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:46.005 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.005 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1046560 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1046560' 00:19:46.265 killing process with pid 1046560 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1046560 00:19:46.265 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.265 00:19:46.265 Latency(us) 00:19:46.265 [2024-10-30T13:06:44.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.265 [2024-10-30T13:06:44.564Z] =================================================================================================================== 00:19:46.265 [2024-10-30T13:06:44.564Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1046560 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1046190 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1046190 ']' 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1046190 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1046190 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:46.265 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1046190' 00:19:46.266 killing process with pid 1046190 00:19:46.266 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1046190 00:19:46.266 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1046190 00:19:46.528 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:46.528 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:46.528 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.528 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.528 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:46.528 "subsystems": [ 00:19:46.528 { 00:19:46.528 "subsystem": "keyring", 00:19:46.528 "config": [ 00:19:46.528 { 00:19:46.528 "method": "keyring_file_add_key", 00:19:46.528 "params": { 00:19:46.528 "name": "key0", 00:19:46.528 "path": "/tmp/tmp.EVxctdrZzx" 00:19:46.528 } 00:19:46.528 } 00:19:46.528 ] 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "subsystem": "iobuf", 00:19:46.528 "config": [ 00:19:46.528 { 00:19:46.528 "method": "iobuf_set_options", 00:19:46.528 "params": { 00:19:46.528 "small_pool_count": 8192, 00:19:46.528 "large_pool_count": 1024, 00:19:46.528 "small_bufsize": 8192, 00:19:46.528 "large_bufsize": 135168, 00:19:46.528 "enable_numa": false 00:19:46.528 } 00:19:46.528 } 00:19:46.528 ] 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "subsystem": "sock", 00:19:46.528 "config": [ 00:19:46.528 { 00:19:46.528 "method": "sock_set_default_impl", 00:19:46.528 "params": { 00:19:46.528 "impl_name": "posix" 00:19:46.528 } 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "method": "sock_impl_set_options", 00:19:46.528 "params": { 00:19:46.528 "impl_name": "ssl", 00:19:46.528 "recv_buf_size": 4096, 00:19:46.528 "send_buf_size": 4096, 00:19:46.528 "enable_recv_pipe": true, 00:19:46.528 "enable_quickack": false, 00:19:46.528 "enable_placement_id": 0, 00:19:46.528 "enable_zerocopy_send_server": true, 00:19:46.528 "enable_zerocopy_send_client": false, 00:19:46.528 "zerocopy_threshold": 0, 00:19:46.528 "tls_version": 0, 00:19:46.528 "enable_ktls": false 00:19:46.528 } 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "method": "sock_impl_set_options", 00:19:46.528 "params": { 00:19:46.528 "impl_name": "posix", 00:19:46.528 "recv_buf_size": 2097152, 00:19:46.528 "send_buf_size": 2097152, 00:19:46.528 "enable_recv_pipe": true, 00:19:46.528 "enable_quickack": false, 00:19:46.528 "enable_placement_id": 0, 00:19:46.528 "enable_zerocopy_send_server": true, 00:19:46.528 "enable_zerocopy_send_client": false, 00:19:46.528 "zerocopy_threshold": 0, 00:19:46.528 "tls_version": 0, 00:19:46.528 "enable_ktls": false 00:19:46.528 } 00:19:46.528 } 00:19:46.528 ] 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "subsystem": "vmd", 00:19:46.528 "config": [] 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "subsystem": "accel", 00:19:46.528 "config": [ 00:19:46.528 { 00:19:46.528 "method": "accel_set_options", 00:19:46.528 "params": { 00:19:46.528 "small_cache_size": 128, 00:19:46.528 "large_cache_size": 16, 00:19:46.528 "task_count": 2048, 00:19:46.528 "sequence_count": 2048, 00:19:46.528 "buf_count": 2048 00:19:46.528 } 00:19:46.528 } 00:19:46.528 ] 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "subsystem": "bdev", 00:19:46.528 "config": [ 00:19:46.528 { 00:19:46.528 "method": "bdev_set_options", 00:19:46.528 "params": { 00:19:46.528 "bdev_io_pool_size": 65535, 00:19:46.528 "bdev_io_cache_size": 256, 00:19:46.528 "bdev_auto_examine": true, 00:19:46.528 "iobuf_small_cache_size": 128, 00:19:46.528 "iobuf_large_cache_size": 16 00:19:46.528 } 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "method": "bdev_raid_set_options", 00:19:46.528 "params": { 00:19:46.528 
"process_window_size_kb": 1024, 00:19:46.528 "process_max_bandwidth_mb_sec": 0 00:19:46.528 } 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "method": "bdev_iscsi_set_options", 00:19:46.528 "params": { 00:19:46.528 "timeout_sec": 30 00:19:46.528 } 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "method": "bdev_nvme_set_options", 00:19:46.528 "params": { 00:19:46.528 "action_on_timeout": "none", 00:19:46.528 "timeout_us": 0, 00:19:46.528 "timeout_admin_us": 0, 00:19:46.528 "keep_alive_timeout_ms": 10000, 00:19:46.528 "arbitration_burst": 0, 00:19:46.528 "low_priority_weight": 0, 00:19:46.528 "medium_priority_weight": 0, 00:19:46.528 "high_priority_weight": 0, 00:19:46.528 "nvme_adminq_poll_period_us": 10000, 00:19:46.528 "nvme_ioq_poll_period_us": 0, 00:19:46.528 "io_queue_requests": 0, 00:19:46.528 "delay_cmd_submit": true, 00:19:46.528 "transport_retry_count": 4, 00:19:46.528 "bdev_retry_count": 3, 00:19:46.528 "transport_ack_timeout": 0, 00:19:46.528 "ctrlr_loss_timeout_sec": 0, 00:19:46.528 "reconnect_delay_sec": 0, 00:19:46.528 "fast_io_fail_timeout_sec": 0, 00:19:46.528 "disable_auto_failback": false, 00:19:46.528 "generate_uuids": false, 00:19:46.528 "transport_tos": 0, 00:19:46.528 "nvme_error_stat": false, 00:19:46.528 "rdma_srq_size": 0, 00:19:46.528 "io_path_stat": false, 00:19:46.528 "allow_accel_sequence": false, 00:19:46.528 "rdma_max_cq_size": 0, 00:19:46.528 "rdma_cm_event_timeout_ms": 0, 00:19:46.528 "dhchap_digests": [ 00:19:46.528 "sha256", 00:19:46.528 "sha384", 00:19:46.528 "sha512" 00:19:46.528 ], 00:19:46.528 "dhchap_dhgroups": [ 00:19:46.528 "null", 00:19:46.528 "ffdhe2048", 00:19:46.528 "ffdhe3072", 00:19:46.528 "ffdhe4096", 00:19:46.528 "ffdhe6144", 00:19:46.528 "ffdhe8192" 00:19:46.528 ] 00:19:46.528 } 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "method": "bdev_nvme_set_hotplug", 00:19:46.528 "params": { 00:19:46.528 "period_us": 100000, 00:19:46.528 "enable": false 00:19:46.528 } 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "method": "bdev_malloc_create", 00:19:46.528 "params": { 00:19:46.528 "name": "malloc0", 00:19:46.528 "num_blocks": 8192, 00:19:46.528 "block_size": 4096, 00:19:46.528 "physical_block_size": 4096, 00:19:46.528 "uuid": "31b9fbd7-3685-4d3b-af37-986988152c86", 00:19:46.528 "optimal_io_boundary": 0, 00:19:46.528 "md_size": 0, 00:19:46.528 "dif_type": 0, 00:19:46.528 "dif_is_head_of_md": false, 00:19:46.528 "dif_pi_format": 0 00:19:46.528 } 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "method": "bdev_wait_for_examine" 00:19:46.528 } 00:19:46.528 ] 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "subsystem": "nbd", 00:19:46.528 "config": [] 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "subsystem": "scheduler", 00:19:46.528 "config": [ 00:19:46.528 { 00:19:46.528 "method": "framework_set_scheduler", 00:19:46.528 "params": { 00:19:46.528 "name": "static" 00:19:46.528 } 00:19:46.528 } 00:19:46.528 ] 00:19:46.528 }, 00:19:46.528 { 00:19:46.528 "subsystem": "nvmf", 00:19:46.528 "config": [ 00:19:46.528 { 00:19:46.528 "method": "nvmf_set_config", 00:19:46.528 "params": { 00:19:46.528 "discovery_filter": "match_any", 00:19:46.529 "admin_cmd_passthru": { 00:19:46.529 "identify_ctrlr": false 00:19:46.529 }, 00:19:46.529 "dhchap_digests": [ 00:19:46.529 "sha256", 00:19:46.529 "sha384", 00:19:46.529 "sha512" 00:19:46.529 ], 00:19:46.529 "dhchap_dhgroups": [ 00:19:46.529 "null", 00:19:46.529 "ffdhe2048", 00:19:46.529 "ffdhe3072", 00:19:46.529 "ffdhe4096", 00:19:46.529 "ffdhe6144", 00:19:46.529 "ffdhe8192" 00:19:46.529 ] 00:19:46.529 } 00:19:46.529 }, 00:19:46.529 { 
00:19:46.529 "method": "nvmf_set_max_subsystems", 00:19:46.529 "params": { 00:19:46.529 "max_subsystems": 1024 00:19:46.529 } 00:19:46.529 }, 00:19:46.529 { 00:19:46.529 "method": "nvmf_set_crdt", 00:19:46.529 "params": { 00:19:46.529 "crdt1": 0, 00:19:46.529 "crdt2": 0, 00:19:46.529 "crdt3": 0 00:19:46.529 } 00:19:46.529 }, 00:19:46.529 { 00:19:46.529 "method": "nvmf_create_transport", 00:19:46.529 "params": { 00:19:46.529 "trtype": "TCP", 00:19:46.529 "max_queue_depth": 128, 00:19:46.529 "max_io_qpairs_per_ctrlr": 127, 00:19:46.529 "in_capsule_data_size": 4096, 00:19:46.529 "max_io_size": 131072, 00:19:46.529 "io_unit_size": 131072, 00:19:46.529 "max_aq_depth": 128, 00:19:46.529 "num_shared_buffers": 511, 00:19:46.529 "buf_cache_size": 4294967295, 00:19:46.529 "dif_insert_or_strip": false, 00:19:46.529 "zcopy": false, 00:19:46.529 "c2h_success": false, 00:19:46.529 "sock_priority": 0, 00:19:46.529 "abort_timeout_sec": 1, 00:19:46.529 "ack_timeout": 0, 00:19:46.529 "data_wr_pool_size": 0 00:19:46.529 } 00:19:46.529 }, 00:19:46.529 { 00:19:46.529 "method": "nvmf_create_subsystem", 00:19:46.529 "params": { 00:19:46.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.529 "allow_any_host": false, 00:19:46.529 "serial_number": "SPDK00000000000001", 00:19:46.529 "model_number": "SPDK bdev Controller", 00:19:46.529 "max_namespaces": 10, 00:19:46.529 "min_cntlid": 1, 00:19:46.529 "max_cntlid": 65519, 00:19:46.529 "ana_reporting": false 00:19:46.529 } 00:19:46.529 }, 00:19:46.529 { 00:19:46.529 "method": "nvmf_subsystem_add_host", 00:19:46.529 "params": { 00:19:46.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.529 "host": "nqn.2016-06.io.spdk:host1", 00:19:46.529 "psk": "key0" 00:19:46.529 } 00:19:46.529 }, 00:19:46.529 { 00:19:46.529 "method": "nvmf_subsystem_add_ns", 00:19:46.529 "params": { 00:19:46.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.529 "namespace": { 00:19:46.529 "nsid": 1, 00:19:46.529 "bdev_name": "malloc0", 00:19:46.529 "nguid": "31B9FBD736854D3BAF37986988152C86", 00:19:46.529 "uuid": "31b9fbd7-3685-4d3b-af37-986988152c86", 00:19:46.529 "no_auto_visible": false 00:19:46.529 } 00:19:46.529 } 00:19:46.529 }, 00:19:46.529 { 00:19:46.529 "method": "nvmf_subsystem_add_listener", 00:19:46.529 "params": { 00:19:46.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.529 "listen_address": { 00:19:46.529 "trtype": "TCP", 00:19:46.529 "adrfam": "IPv4", 00:19:46.529 "traddr": "10.0.0.2", 00:19:46.529 "trsvcid": "4420" 00:19:46.529 }, 00:19:46.529 "secure_channel": true 00:19:46.529 } 00:19:46.529 } 00:19:46.529 ] 00:19:46.529 } 00:19:46.529 ] 00:19:46.529 }' 00:19:46.529 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:46.529 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1046905 00:19:46.529 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1046905 00:19:46.529 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1046905 ']' 00:19:46.529 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.529 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.529 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:19:46.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.529 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.529 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.529 [2024-10-30 14:06:44.641975] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:19:46.529 [2024-10-30 14:06:44.642023] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.529 [2024-10-30 14:06:44.695393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.529 [2024-10-30 14:06:44.723544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.529 [2024-10-30 14:06:44.723569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.529 [2024-10-30 14:06:44.723575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.529 [2024-10-30 14:06:44.723580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.529 [2024-10-30 14:06:44.723584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.529 [2024-10-30 14:06:44.724044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.790 [2024-10-30 14:06:44.916405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.790 [2024-10-30 14:06:44.948430] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.790 [2024-10-30 14:06:44.948603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.362 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1047188 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1047188 /var/tmp/bdevperf.sock 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1047188 ']' 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.363 
14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.363 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:47.363 "subsystems": [ 00:19:47.363 { 00:19:47.363 "subsystem": "keyring", 00:19:47.363 "config": [ 00:19:47.363 { 00:19:47.363 "method": "keyring_file_add_key", 00:19:47.363 "params": { 00:19:47.363 "name": "key0", 00:19:47.363 "path": "/tmp/tmp.EVxctdrZzx" 00:19:47.363 } 00:19:47.363 } 00:19:47.363 ] 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "subsystem": "iobuf", 00:19:47.363 "config": [ 00:19:47.363 { 00:19:47.363 "method": "iobuf_set_options", 00:19:47.363 "params": { 00:19:47.363 "small_pool_count": 8192, 00:19:47.363 "large_pool_count": 1024, 00:19:47.363 "small_bufsize": 8192, 00:19:47.363 "large_bufsize": 135168, 00:19:47.363 "enable_numa": false 00:19:47.363 } 00:19:47.363 } 00:19:47.363 ] 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "subsystem": "sock", 00:19:47.363 "config": [ 00:19:47.363 { 00:19:47.363 "method": "sock_set_default_impl", 00:19:47.363 "params": { 00:19:47.363 "impl_name": "posix" 00:19:47.363 } 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "method": "sock_impl_set_options", 00:19:47.363 "params": { 00:19:47.363 "impl_name": "ssl", 00:19:47.363 "recv_buf_size": 4096, 00:19:47.363 "send_buf_size": 4096, 00:19:47.363 "enable_recv_pipe": true, 00:19:47.363 "enable_quickack": false, 00:19:47.363 "enable_placement_id": 0, 00:19:47.363 "enable_zerocopy_send_server": true, 00:19:47.363 "enable_zerocopy_send_client": false, 00:19:47.363 "zerocopy_threshold": 0, 00:19:47.363 "tls_version": 0, 00:19:47.363 "enable_ktls": false 00:19:47.363 } 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "method": "sock_impl_set_options", 00:19:47.363 "params": { 00:19:47.363 "impl_name": "posix", 00:19:47.363 "recv_buf_size": 2097152, 00:19:47.363 "send_buf_size": 2097152, 00:19:47.363 "enable_recv_pipe": true, 00:19:47.363 "enable_quickack": false, 00:19:47.363 "enable_placement_id": 0, 00:19:47.363 "enable_zerocopy_send_server": true, 00:19:47.363 "enable_zerocopy_send_client": false, 00:19:47.363 "zerocopy_threshold": 0, 00:19:47.363 "tls_version": 0, 00:19:47.363 "enable_ktls": false 00:19:47.363 } 00:19:47.363 } 00:19:47.363 ] 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "subsystem": "vmd", 00:19:47.363 "config": [] 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "subsystem": "accel", 00:19:47.363 "config": [ 00:19:47.363 { 00:19:47.363 "method": "accel_set_options", 00:19:47.363 "params": { 00:19:47.363 "small_cache_size": 128, 00:19:47.363 "large_cache_size": 16, 00:19:47.363 "task_count": 2048, 00:19:47.363 "sequence_count": 2048, 00:19:47.363 "buf_count": 2048 00:19:47.363 } 00:19:47.363 } 00:19:47.363 ] 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "subsystem": "bdev", 00:19:47.363 "config": [ 00:19:47.363 { 00:19:47.363 "method": "bdev_set_options", 00:19:47.363 "params": { 00:19:47.363 "bdev_io_pool_size": 65535, 00:19:47.363 "bdev_io_cache_size": 256, 00:19:47.363 "bdev_auto_examine": true, 00:19:47.363 "iobuf_small_cache_size": 128, 00:19:47.363 "iobuf_large_cache_size": 16 00:19:47.363 } 00:19:47.363 
}, 00:19:47.363 { 00:19:47.363 "method": "bdev_raid_set_options", 00:19:47.363 "params": { 00:19:47.363 "process_window_size_kb": 1024, 00:19:47.363 "process_max_bandwidth_mb_sec": 0 00:19:47.363 } 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "method": "bdev_iscsi_set_options", 00:19:47.363 "params": { 00:19:47.363 "timeout_sec": 30 00:19:47.363 } 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "method": "bdev_nvme_set_options", 00:19:47.363 "params": { 00:19:47.363 "action_on_timeout": "none", 00:19:47.363 "timeout_us": 0, 00:19:47.363 "timeout_admin_us": 0, 00:19:47.363 "keep_alive_timeout_ms": 10000, 00:19:47.363 "arbitration_burst": 0, 00:19:47.363 "low_priority_weight": 0, 00:19:47.363 "medium_priority_weight": 0, 00:19:47.363 "high_priority_weight": 0, 00:19:47.363 "nvme_adminq_poll_period_us": 10000, 00:19:47.363 "nvme_ioq_poll_period_us": 0, 00:19:47.363 "io_queue_requests": 512, 00:19:47.363 "delay_cmd_submit": true, 00:19:47.363 "transport_retry_count": 4, 00:19:47.363 "bdev_retry_count": 3, 00:19:47.363 "transport_ack_timeout": 0, 00:19:47.363 "ctrlr_loss_timeout_sec": 0, 00:19:47.363 "reconnect_delay_sec": 0, 00:19:47.363 "fast_io_fail_timeout_sec": 0, 00:19:47.363 "disable_auto_failback": false, 00:19:47.363 "generate_uuids": false, 00:19:47.363 "transport_tos": 0, 00:19:47.363 "nvme_error_stat": false, 00:19:47.363 "rdma_srq_size": 0, 00:19:47.363 "io_path_stat": false, 00:19:47.363 "allow_accel_sequence": false, 00:19:47.363 "rdma_max_cq_size": 0, 00:19:47.363 "rdma_cm_event_timeout_ms": 0, 00:19:47.363 "dhchap_digests": [ 00:19:47.363 "sha256", 00:19:47.363 "sha384", 00:19:47.363 "sha512" 00:19:47.363 ], 00:19:47.363 "dhchap_dhgroups": [ 00:19:47.363 "null", 00:19:47.363 "ffdhe2048", 00:19:47.363 "ffdhe3072", 00:19:47.363 "ffdhe4096", 00:19:47.363 "ffdhe6144", 00:19:47.363 "ffdhe8192" 00:19:47.363 ] 00:19:47.363 } 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "method": "bdev_nvme_attach_controller", 00:19:47.363 "params": { 00:19:47.363 "name": "TLSTEST", 00:19:47.363 "trtype": "TCP", 00:19:47.363 "adrfam": "IPv4", 00:19:47.363 "traddr": "10.0.0.2", 00:19:47.363 "trsvcid": "4420", 00:19:47.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.363 "prchk_reftag": false, 00:19:47.363 "prchk_guard": false, 00:19:47.363 "ctrlr_loss_timeout_sec": 0, 00:19:47.363 "reconnect_delay_sec": 0, 00:19:47.363 "fast_io_fail_timeout_sec": 0, 00:19:47.363 "psk": "key0", 00:19:47.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.363 "hdgst": false, 00:19:47.363 "ddgst": false, 00:19:47.363 "multipath": "multipath" 00:19:47.363 } 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "method": "bdev_nvme_set_hotplug", 00:19:47.363 "params": { 00:19:47.363 "period_us": 100000, 00:19:47.363 "enable": false 00:19:47.363 } 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "method": "bdev_wait_for_examine" 00:19:47.363 } 00:19:47.363 ] 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "subsystem": "nbd", 00:19:47.363 "config": [] 00:19:47.363 } 00:19:47.363 ] 00:19:47.363 }' 00:19:47.363 [2024-10-30 14:06:45.537500] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
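A note for readers on the "-c /dev/fd/63" argument in the bdevperf invocation above (target/tls.sh@206): the test never writes that JSON to disk; the config echoed above is handed to bdevperf through bash process substitution, which the shell exposes as a /dev/fd path. A minimal sketch of the pattern, with the flags taken from this run (the variable name is illustrative, not from the test script):
CONF='{ "subsystems": [ ... ] }'    # the bdevperf JSON shown above, elided here
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$CONF")              # the <(...) substitution is what appears as /dev/fd/63 in the trace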
00:19:47.363 [2024-10-30 14:06:45.537540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047188 ] 00:19:47.363 [2024-10-30 14:06:45.586991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.364 [2024-10-30 14:06:45.615780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.624 [2024-10-30 14:06:45.749401] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.196 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.196 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.196 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:48.196 Running I/O for 10 seconds... 00:19:50.524 5521.00 IOPS, 21.57 MiB/s [2024-10-30T13:06:49.769Z] 5416.00 IOPS, 21.16 MiB/s [2024-10-30T13:06:50.710Z] 5330.00 IOPS, 20.82 MiB/s [2024-10-30T13:06:51.651Z] 5160.50 IOPS, 20.16 MiB/s [2024-10-30T13:06:52.702Z] 5431.00 IOPS, 21.21 MiB/s [2024-10-30T13:06:53.696Z] 5542.67 IOPS, 21.65 MiB/s [2024-10-30T13:06:54.639Z] 5451.43 IOPS, 21.29 MiB/s [2024-10-30T13:06:55.581Z] 5445.88 IOPS, 21.27 MiB/s [2024-10-30T13:06:56.523Z] 5561.89 IOPS, 21.73 MiB/s [2024-10-30T13:06:56.523Z] 5538.70 IOPS, 21.64 MiB/s 00:19:58.224 Latency(us) 00:19:58.224 [2024-10-30T13:06:56.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.224 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:58.224 Verification LBA range: start 0x0 length 0x2000 00:19:58.224 TLSTESTn1 : 10.01 5544.23 21.66 0.00 0.00 23056.93 4724.05 36044.80 00:19:58.224 [2024-10-30T13:06:56.523Z] =================================================================================================================== 00:19:58.224 [2024-10-30T13:06:56.523Z] Total : 5544.23 21.66 0.00 0.00 23056.93 4724.05 36044.80 00:19:58.224 { 00:19:58.224 "results": [ 00:19:58.224 { 00:19:58.224 "job": "TLSTESTn1", 00:19:58.224 "core_mask": "0x4", 00:19:58.224 "workload": "verify", 00:19:58.224 "status": "finished", 00:19:58.224 "verify_range": { 00:19:58.224 "start": 0, 00:19:58.224 "length": 8192 00:19:58.224 }, 00:19:58.224 "queue_depth": 128, 00:19:58.224 "io_size": 4096, 00:19:58.224 "runtime": 10.012932, 00:19:58.224 "iops": 5544.230201503416, 00:19:58.224 "mibps": 21.657149224622717, 00:19:58.224 "io_failed": 0, 00:19:58.224 "io_timeout": 0, 00:19:58.224 "avg_latency_us": 23056.92856912971, 00:19:58.224 "min_latency_us": 4724.053333333333, 00:19:58.224 "max_latency_us": 36044.8 00:19:58.224 } 00:19:58.224 ], 00:19:58.224 "core_count": 1 00:19:58.224 } 00:19:58.224 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.224 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1047188 00:19:58.224 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1047188 ']' 00:19:58.224 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1047188 00:19:58.224 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:58.224 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.224 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047188 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047188' 00:19:58.485 killing process with pid 1047188 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1047188 00:19:58.485 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.485 00:19:58.485 Latency(us) 00:19:58.485 [2024-10-30T13:06:56.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.485 [2024-10-30T13:06:56.784Z] =================================================================================================================== 00:19:58.485 [2024-10-30T13:06:56.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1047188 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1046905 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1046905 ']' 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1046905 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1046905 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1046905' 00:19:58.485 killing process with pid 1046905 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1046905 00:19:58.485 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1046905 00:19:58.746 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:58.746 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.746 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.746 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.747 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1049282 00:19:58.747 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1049282 00:19:58.747 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
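For readers following the flow: the target application has just been restarted (nvmfappstart at target/tls.sh@220), and the next step in the log is another setup_nvmf_tgt pass (target/tls.sh@221) with the same PSK file. A rough bash sketch of that RPC sequence, condensed from the rpc.py invocations visible in this log (the workspace path, the 10.0.0.2 listener address and the /tmp key file are specific to this run; this is a sketch, not the test script itself):
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.EVxctdrZzx                                             # TLS PSK interchange file created earlier in the test
$RPC nvmf_create_transport -t tcp -o                                # TCP transport, flags as invoked in this log
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB bdev, 4 KiB blocks (8192 blocks in the saved config)
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 "$KEY"                               # register the PSK under the name key0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0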
00:19:58.747 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1049282 ']' 00:19:58.747 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.747 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.747 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.747 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.747 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.747 [2024-10-30 14:06:56.853192] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:19:58.747 [2024-10-30 14:06:56.853245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.747 [2024-10-30 14:06:56.948332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.747 [2024-10-30 14:06:56.988084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.747 [2024-10-30 14:06:56.988133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.747 [2024-10-30 14:06:56.988142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.747 [2024-10-30 14:06:56.988149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.747 [2024-10-30 14:06:56.988155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:58.747 [2024-10-30 14:06:56.988852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.EVxctdrZzx 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EVxctdrZzx 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:59.688 [2024-10-30 14:06:57.860896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.688 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:59.950 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:59.950 [2024-10-30 14:06:58.237869] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.950 [2024-10-30 14:06:58.238179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.211 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:00.211 malloc0 00:20:00.211 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:00.472 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx 00:20:00.733 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1049719 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1049719 /var/tmp/bdevperf.sock 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1049719 ']' 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.994 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.994 [2024-10-30 14:06:59.104597] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:20:00.994 [2024-10-30 14:06:59.104669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049719 ] 00:20:00.994 [2024-10-30 14:06:59.192750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.994 [2024-10-30 14:06:59.227459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.935 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.935 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.935 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx 00:20:01.935 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:02.197 [2024-10-30 14:07:00.253622] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.197 nvme0n1 00:20:02.197 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.197 Running I/O for 1 seconds... 
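The run in progress here follows the bdevperf-over-RPC pattern used throughout this test: bdevperf is launched idle with -z, configured over its own RPC socket, then triggered with perform_tests. A condensed sketch built from the commands visible in this log (paths, socket and key file are the ones from this run; treat it as an outline, not the script):
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &    # -z: start idle, wait for RPC
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1    # NVMe/TCP connect with the TLS PSK (note the experimental-TLS notice)
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # prints "Running I/O for 1 seconds..."
# Sanity check against the Latency table below: 4611.32 IOPS x 4096 B per I/O is roughly 18.01 MiB/s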
00:20:03.401 4563.00 IOPS, 17.82 MiB/s 00:20:03.401 Latency(us) 00:20:03.401 [2024-10-30T13:07:01.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.401 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.401 Verification LBA range: start 0x0 length 0x2000 00:20:03.401 nvme0n1 : 1.02 4611.32 18.01 0.00 0.00 27575.02 4505.60 29491.20 00:20:03.401 [2024-10-30T13:07:01.700Z] =================================================================================================================== 00:20:03.401 [2024-10-30T13:07:01.700Z] Total : 4611.32 18.01 0.00 0.00 27575.02 4505.60 29491.20 00:20:03.401 { 00:20:03.401 "results": [ 00:20:03.401 { 00:20:03.401 "job": "nvme0n1", 00:20:03.401 "core_mask": "0x2", 00:20:03.401 "workload": "verify", 00:20:03.401 "status": "finished", 00:20:03.401 "verify_range": { 00:20:03.401 "start": 0, 00:20:03.401 "length": 8192 00:20:03.401 }, 00:20:03.401 "queue_depth": 128, 00:20:03.401 "io_size": 4096, 00:20:03.401 "runtime": 1.017496, 00:20:03.401 "iops": 4611.320339342858, 00:20:03.401 "mibps": 18.012970075558037, 00:20:03.401 "io_failed": 0, 00:20:03.401 "io_timeout": 0, 00:20:03.401 "avg_latency_us": 27575.01926683717, 00:20:03.401 "min_latency_us": 4505.6, 00:20:03.401 "max_latency_us": 29491.2 00:20:03.401 } 00:20:03.401 ], 00:20:03.401 "core_count": 1 00:20:03.401 } 00:20:03.401 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1049719 00:20:03.401 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1049719 ']' 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1049719 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1049719 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1049719' 00:20:03.402 killing process with pid 1049719 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1049719 00:20:03.402 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.402 00:20:03.402 Latency(us) 00:20:03.402 [2024-10-30T13:07:01.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.402 [2024-10-30T13:07:01.701Z] =================================================================================================================== 00:20:03.402 [2024-10-30T13:07:01.701Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1049719 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1049282 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1049282 ']' 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1049282 00:20:03.402 14:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.402 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1049282 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1049282' 00:20:03.663 killing process with pid 1049282 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1049282 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1049282 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1050331 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1050331 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1050331 ']' 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.663 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.663 [2024-10-30 14:07:01.904061] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:20:03.663 [2024-10-30 14:07:01.904116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.926 [2024-10-30 14:07:01.997033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.926 [2024-10-30 14:07:02.030761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.926 [2024-10-30 14:07:02.030797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:03.926 [2024-10-30 14:07:02.030806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.926 [2024-10-30 14:07:02.030812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.926 [2024-10-30 14:07:02.030818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.926 [2024-10-30 14:07:02.031387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.499 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.499 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.499 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.499 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.499 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.499 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.499 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:04.499 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.499 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.499 [2024-10-30 14:07:02.777232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.499 malloc0 00:20:04.761 [2024-10-30 14:07:02.807453] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.761 [2024-10-30 14:07:02.807761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1050550 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1050550 /var/tmp/bdevperf.sock 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1050550 ']' 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.761 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.761 [2024-10-30 14:07:02.891280] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
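The notices just above (TCP transport init, malloc0, the experimental-TLS listener on 10.0.0.2 port 4420) are the target side being prepared with the mirror-image RPCs. A rough sketch of that sequence is below; the objects and values mirror the saved configuration dumped later in this log, but the exact rpc.py option spellings (notably --psk, and the TLS/ssl listener switch, which is omitted here) are assumptions rather than a copy of the test script, which additionally wraps the target in the cvl_0_0_ns_spdk network namespace:

  rpc=./scripts/rpc.py                                    # target RPC socket defaults to /var/tmp/spdk.sock

  $rpc nvmf_create_transport -t TCP                       # "*** TCP Transport Init ***"
  $rpc bdev_malloc_create -b malloc0 32 4096              # 8192 blocks x 4 KiB, per the saved config
  $rpc keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx      # same key file the initiator loads
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420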
00:20:04.761 [2024-10-30 14:07:02.891348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050550 ] 00:20:04.761 [2024-10-30 14:07:02.977168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.761 [2024-10-30 14:07:03.011079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.705 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.705 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:05.705 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EVxctdrZzx 00:20:05.705 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:05.966 [2024-10-30 14:07:04.028879] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.966 nvme0n1 00:20:05.966 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.966 Running I/O for 1 seconds... 00:20:07.352 5343.00 IOPS, 20.87 MiB/s 00:20:07.352 Latency(us) 00:20:07.352 [2024-10-30T13:07:05.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.353 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:07.353 Verification LBA range: start 0x0 length 0x2000 00:20:07.353 nvme0n1 : 1.02 5378.05 21.01 0.00 0.00 23652.78 4450.99 36918.61 00:20:07.353 [2024-10-30T13:07:05.652Z] =================================================================================================================== 00:20:07.353 [2024-10-30T13:07:05.652Z] Total : 5378.05 21.01 0.00 0.00 23652.78 4450.99 36918.61 00:20:07.353 { 00:20:07.353 "results": [ 00:20:07.353 { 00:20:07.353 "job": "nvme0n1", 00:20:07.353 "core_mask": "0x2", 00:20:07.353 "workload": "verify", 00:20:07.353 "status": "finished", 00:20:07.353 "verify_range": { 00:20:07.353 "start": 0, 00:20:07.353 "length": 8192 00:20:07.353 }, 00:20:07.353 "queue_depth": 128, 00:20:07.353 "io_size": 4096, 00:20:07.353 "runtime": 1.01747, 00:20:07.353 "iops": 5378.045544340373, 00:20:07.353 "mibps": 21.007990407579584, 00:20:07.353 "io_failed": 0, 00:20:07.353 "io_timeout": 0, 00:20:07.353 "avg_latency_us": 23652.778167641325, 00:20:07.353 "min_latency_us": 4450.986666666667, 00:20:07.353 "max_latency_us": 36918.613333333335 00:20:07.353 } 00:20:07.353 ], 00:20:07.353 "core_count": 1 00:20:07.353 } 00:20:07.353 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:07.353 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.353 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.353 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.353 14:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:07.353 "subsystems": [ 00:20:07.353 { 00:20:07.353 "subsystem": "keyring", 00:20:07.353 "config": [ 00:20:07.353 { 00:20:07.353 "method": "keyring_file_add_key", 00:20:07.353 "params": { 00:20:07.353 "name": "key0", 00:20:07.353 "path": "/tmp/tmp.EVxctdrZzx" 00:20:07.353 } 00:20:07.353 } 00:20:07.353 ] 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "subsystem": "iobuf", 00:20:07.353 "config": [ 00:20:07.353 { 00:20:07.353 "method": "iobuf_set_options", 00:20:07.353 "params": { 00:20:07.353 "small_pool_count": 8192, 00:20:07.353 "large_pool_count": 1024, 00:20:07.353 "small_bufsize": 8192, 00:20:07.353 "large_bufsize": 135168, 00:20:07.353 "enable_numa": false 00:20:07.353 } 00:20:07.353 } 00:20:07.353 ] 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "subsystem": "sock", 00:20:07.353 "config": [ 00:20:07.353 { 00:20:07.353 "method": "sock_set_default_impl", 00:20:07.353 "params": { 00:20:07.353 "impl_name": "posix" 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "sock_impl_set_options", 00:20:07.353 "params": { 00:20:07.353 "impl_name": "ssl", 00:20:07.353 "recv_buf_size": 4096, 00:20:07.353 "send_buf_size": 4096, 00:20:07.353 "enable_recv_pipe": true, 00:20:07.353 "enable_quickack": false, 00:20:07.353 "enable_placement_id": 0, 00:20:07.353 "enable_zerocopy_send_server": true, 00:20:07.353 "enable_zerocopy_send_client": false, 00:20:07.353 "zerocopy_threshold": 0, 00:20:07.353 "tls_version": 0, 00:20:07.353 "enable_ktls": false 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "sock_impl_set_options", 00:20:07.353 "params": { 00:20:07.353 "impl_name": "posix", 00:20:07.353 "recv_buf_size": 2097152, 00:20:07.353 "send_buf_size": 2097152, 00:20:07.353 "enable_recv_pipe": true, 00:20:07.353 "enable_quickack": false, 00:20:07.353 "enable_placement_id": 0, 00:20:07.353 "enable_zerocopy_send_server": true, 00:20:07.353 "enable_zerocopy_send_client": false, 00:20:07.353 "zerocopy_threshold": 0, 00:20:07.353 "tls_version": 0, 00:20:07.353 "enable_ktls": false 00:20:07.353 } 00:20:07.353 } 00:20:07.353 ] 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "subsystem": "vmd", 00:20:07.353 "config": [] 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "subsystem": "accel", 00:20:07.353 "config": [ 00:20:07.353 { 00:20:07.353 "method": "accel_set_options", 00:20:07.353 "params": { 00:20:07.353 "small_cache_size": 128, 00:20:07.353 "large_cache_size": 16, 00:20:07.353 "task_count": 2048, 00:20:07.353 "sequence_count": 2048, 00:20:07.353 "buf_count": 2048 00:20:07.353 } 00:20:07.353 } 00:20:07.353 ] 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "subsystem": "bdev", 00:20:07.353 "config": [ 00:20:07.353 { 00:20:07.353 "method": "bdev_set_options", 00:20:07.353 "params": { 00:20:07.353 "bdev_io_pool_size": 65535, 00:20:07.353 "bdev_io_cache_size": 256, 00:20:07.353 "bdev_auto_examine": true, 00:20:07.353 "iobuf_small_cache_size": 128, 00:20:07.353 "iobuf_large_cache_size": 16 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "bdev_raid_set_options", 00:20:07.353 "params": { 00:20:07.353 "process_window_size_kb": 1024, 00:20:07.353 "process_max_bandwidth_mb_sec": 0 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "bdev_iscsi_set_options", 00:20:07.353 "params": { 00:20:07.353 "timeout_sec": 30 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "bdev_nvme_set_options", 00:20:07.353 "params": { 00:20:07.353 "action_on_timeout": "none", 00:20:07.353 
"timeout_us": 0, 00:20:07.353 "timeout_admin_us": 0, 00:20:07.353 "keep_alive_timeout_ms": 10000, 00:20:07.353 "arbitration_burst": 0, 00:20:07.353 "low_priority_weight": 0, 00:20:07.353 "medium_priority_weight": 0, 00:20:07.353 "high_priority_weight": 0, 00:20:07.353 "nvme_adminq_poll_period_us": 10000, 00:20:07.353 "nvme_ioq_poll_period_us": 0, 00:20:07.353 "io_queue_requests": 0, 00:20:07.353 "delay_cmd_submit": true, 00:20:07.353 "transport_retry_count": 4, 00:20:07.353 "bdev_retry_count": 3, 00:20:07.353 "transport_ack_timeout": 0, 00:20:07.353 "ctrlr_loss_timeout_sec": 0, 00:20:07.353 "reconnect_delay_sec": 0, 00:20:07.353 "fast_io_fail_timeout_sec": 0, 00:20:07.353 "disable_auto_failback": false, 00:20:07.353 "generate_uuids": false, 00:20:07.353 "transport_tos": 0, 00:20:07.353 "nvme_error_stat": false, 00:20:07.353 "rdma_srq_size": 0, 00:20:07.353 "io_path_stat": false, 00:20:07.353 "allow_accel_sequence": false, 00:20:07.353 "rdma_max_cq_size": 0, 00:20:07.353 "rdma_cm_event_timeout_ms": 0, 00:20:07.353 "dhchap_digests": [ 00:20:07.353 "sha256", 00:20:07.353 "sha384", 00:20:07.353 "sha512" 00:20:07.353 ], 00:20:07.353 "dhchap_dhgroups": [ 00:20:07.353 "null", 00:20:07.353 "ffdhe2048", 00:20:07.353 "ffdhe3072", 00:20:07.353 "ffdhe4096", 00:20:07.353 "ffdhe6144", 00:20:07.353 "ffdhe8192" 00:20:07.353 ] 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "bdev_nvme_set_hotplug", 00:20:07.353 "params": { 00:20:07.353 "period_us": 100000, 00:20:07.353 "enable": false 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "bdev_malloc_create", 00:20:07.353 "params": { 00:20:07.353 "name": "malloc0", 00:20:07.353 "num_blocks": 8192, 00:20:07.353 "block_size": 4096, 00:20:07.353 "physical_block_size": 4096, 00:20:07.353 "uuid": "0c2ec783-f7d0-4b5e-8e80-0f4b2e47a870", 00:20:07.353 "optimal_io_boundary": 0, 00:20:07.353 "md_size": 0, 00:20:07.353 "dif_type": 0, 00:20:07.353 "dif_is_head_of_md": false, 00:20:07.353 "dif_pi_format": 0 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "bdev_wait_for_examine" 00:20:07.353 } 00:20:07.353 ] 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "subsystem": "nbd", 00:20:07.353 "config": [] 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "subsystem": "scheduler", 00:20:07.353 "config": [ 00:20:07.353 { 00:20:07.353 "method": "framework_set_scheduler", 00:20:07.353 "params": { 00:20:07.353 "name": "static" 00:20:07.353 } 00:20:07.353 } 00:20:07.353 ] 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "subsystem": "nvmf", 00:20:07.353 "config": [ 00:20:07.353 { 00:20:07.353 "method": "nvmf_set_config", 00:20:07.353 "params": { 00:20:07.353 "discovery_filter": "match_any", 00:20:07.353 "admin_cmd_passthru": { 00:20:07.353 "identify_ctrlr": false 00:20:07.353 }, 00:20:07.353 "dhchap_digests": [ 00:20:07.353 "sha256", 00:20:07.353 "sha384", 00:20:07.353 "sha512" 00:20:07.353 ], 00:20:07.353 "dhchap_dhgroups": [ 00:20:07.353 "null", 00:20:07.353 "ffdhe2048", 00:20:07.353 "ffdhe3072", 00:20:07.353 "ffdhe4096", 00:20:07.353 "ffdhe6144", 00:20:07.353 "ffdhe8192" 00:20:07.353 ] 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "nvmf_set_max_subsystems", 00:20:07.353 "params": { 00:20:07.353 "max_subsystems": 1024 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.353 "method": "nvmf_set_crdt", 00:20:07.353 "params": { 00:20:07.353 "crdt1": 0, 00:20:07.353 "crdt2": 0, 00:20:07.353 "crdt3": 0 00:20:07.353 } 00:20:07.353 }, 00:20:07.353 { 00:20:07.354 "method": "nvmf_create_transport", 00:20:07.354 "params": 
{ 00:20:07.354 "trtype": "TCP", 00:20:07.354 "max_queue_depth": 128, 00:20:07.354 "max_io_qpairs_per_ctrlr": 127, 00:20:07.354 "in_capsule_data_size": 4096, 00:20:07.354 "max_io_size": 131072, 00:20:07.354 "io_unit_size": 131072, 00:20:07.354 "max_aq_depth": 128, 00:20:07.354 "num_shared_buffers": 511, 00:20:07.354 "buf_cache_size": 4294967295, 00:20:07.354 "dif_insert_or_strip": false, 00:20:07.354 "zcopy": false, 00:20:07.354 "c2h_success": false, 00:20:07.354 "sock_priority": 0, 00:20:07.354 "abort_timeout_sec": 1, 00:20:07.354 "ack_timeout": 0, 00:20:07.354 "data_wr_pool_size": 0 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "nvmf_create_subsystem", 00:20:07.354 "params": { 00:20:07.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.354 "allow_any_host": false, 00:20:07.354 "serial_number": "00000000000000000000", 00:20:07.354 "model_number": "SPDK bdev Controller", 00:20:07.354 "max_namespaces": 32, 00:20:07.354 "min_cntlid": 1, 00:20:07.354 "max_cntlid": 65519, 00:20:07.354 "ana_reporting": false 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "nvmf_subsystem_add_host", 00:20:07.354 "params": { 00:20:07.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.354 "host": "nqn.2016-06.io.spdk:host1", 00:20:07.354 "psk": "key0" 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "nvmf_subsystem_add_ns", 00:20:07.354 "params": { 00:20:07.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.354 "namespace": { 00:20:07.354 "nsid": 1, 00:20:07.354 "bdev_name": "malloc0", 00:20:07.354 "nguid": "0C2EC783F7D04B5E8E800F4B2E47A870", 00:20:07.354 "uuid": "0c2ec783-f7d0-4b5e-8e80-0f4b2e47a870", 00:20:07.354 "no_auto_visible": false 00:20:07.354 } 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "nvmf_subsystem_add_listener", 00:20:07.354 "params": { 00:20:07.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.354 "listen_address": { 00:20:07.354 "trtype": "TCP", 00:20:07.354 "adrfam": "IPv4", 00:20:07.354 "traddr": "10.0.0.2", 00:20:07.354 "trsvcid": "4420" 00:20:07.354 }, 00:20:07.354 "secure_channel": false, 00:20:07.354 "sock_impl": "ssl" 00:20:07.354 } 00:20:07.354 } 00:20:07.354 ] 00:20:07.354 } 00:20:07.354 ] 00:20:07.354 }' 00:20:07.354 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:07.354 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:07.354 "subsystems": [ 00:20:07.354 { 00:20:07.354 "subsystem": "keyring", 00:20:07.354 "config": [ 00:20:07.354 { 00:20:07.354 "method": "keyring_file_add_key", 00:20:07.354 "params": { 00:20:07.354 "name": "key0", 00:20:07.354 "path": "/tmp/tmp.EVxctdrZzx" 00:20:07.354 } 00:20:07.354 } 00:20:07.354 ] 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "subsystem": "iobuf", 00:20:07.354 "config": [ 00:20:07.354 { 00:20:07.354 "method": "iobuf_set_options", 00:20:07.354 "params": { 00:20:07.354 "small_pool_count": 8192, 00:20:07.354 "large_pool_count": 1024, 00:20:07.354 "small_bufsize": 8192, 00:20:07.354 "large_bufsize": 135168, 00:20:07.354 "enable_numa": false 00:20:07.354 } 00:20:07.354 } 00:20:07.354 ] 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "subsystem": "sock", 00:20:07.354 "config": [ 00:20:07.354 { 00:20:07.354 "method": "sock_set_default_impl", 00:20:07.354 "params": { 00:20:07.354 "impl_name": "posix" 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "sock_impl_set_options", 00:20:07.354 
"params": { 00:20:07.354 "impl_name": "ssl", 00:20:07.354 "recv_buf_size": 4096, 00:20:07.354 "send_buf_size": 4096, 00:20:07.354 "enable_recv_pipe": true, 00:20:07.354 "enable_quickack": false, 00:20:07.354 "enable_placement_id": 0, 00:20:07.354 "enable_zerocopy_send_server": true, 00:20:07.354 "enable_zerocopy_send_client": false, 00:20:07.354 "zerocopy_threshold": 0, 00:20:07.354 "tls_version": 0, 00:20:07.354 "enable_ktls": false 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "sock_impl_set_options", 00:20:07.354 "params": { 00:20:07.354 "impl_name": "posix", 00:20:07.354 "recv_buf_size": 2097152, 00:20:07.354 "send_buf_size": 2097152, 00:20:07.354 "enable_recv_pipe": true, 00:20:07.354 "enable_quickack": false, 00:20:07.354 "enable_placement_id": 0, 00:20:07.354 "enable_zerocopy_send_server": true, 00:20:07.354 "enable_zerocopy_send_client": false, 00:20:07.354 "zerocopy_threshold": 0, 00:20:07.354 "tls_version": 0, 00:20:07.354 "enable_ktls": false 00:20:07.354 } 00:20:07.354 } 00:20:07.354 ] 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "subsystem": "vmd", 00:20:07.354 "config": [] 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "subsystem": "accel", 00:20:07.354 "config": [ 00:20:07.354 { 00:20:07.354 "method": "accel_set_options", 00:20:07.354 "params": { 00:20:07.354 "small_cache_size": 128, 00:20:07.354 "large_cache_size": 16, 00:20:07.354 "task_count": 2048, 00:20:07.354 "sequence_count": 2048, 00:20:07.354 "buf_count": 2048 00:20:07.354 } 00:20:07.354 } 00:20:07.354 ] 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "subsystem": "bdev", 00:20:07.354 "config": [ 00:20:07.354 { 00:20:07.354 "method": "bdev_set_options", 00:20:07.354 "params": { 00:20:07.354 "bdev_io_pool_size": 65535, 00:20:07.354 "bdev_io_cache_size": 256, 00:20:07.354 "bdev_auto_examine": true, 00:20:07.354 "iobuf_small_cache_size": 128, 00:20:07.354 "iobuf_large_cache_size": 16 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "bdev_raid_set_options", 00:20:07.354 "params": { 00:20:07.354 "process_window_size_kb": 1024, 00:20:07.354 "process_max_bandwidth_mb_sec": 0 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "bdev_iscsi_set_options", 00:20:07.354 "params": { 00:20:07.354 "timeout_sec": 30 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "bdev_nvme_set_options", 00:20:07.354 "params": { 00:20:07.354 "action_on_timeout": "none", 00:20:07.354 "timeout_us": 0, 00:20:07.354 "timeout_admin_us": 0, 00:20:07.354 "keep_alive_timeout_ms": 10000, 00:20:07.354 "arbitration_burst": 0, 00:20:07.354 "low_priority_weight": 0, 00:20:07.354 "medium_priority_weight": 0, 00:20:07.354 "high_priority_weight": 0, 00:20:07.354 "nvme_adminq_poll_period_us": 10000, 00:20:07.354 "nvme_ioq_poll_period_us": 0, 00:20:07.354 "io_queue_requests": 512, 00:20:07.354 "delay_cmd_submit": true, 00:20:07.354 "transport_retry_count": 4, 00:20:07.354 "bdev_retry_count": 3, 00:20:07.354 "transport_ack_timeout": 0, 00:20:07.354 "ctrlr_loss_timeout_sec": 0, 00:20:07.354 "reconnect_delay_sec": 0, 00:20:07.354 "fast_io_fail_timeout_sec": 0, 00:20:07.354 "disable_auto_failback": false, 00:20:07.354 "generate_uuids": false, 00:20:07.354 "transport_tos": 0, 00:20:07.354 "nvme_error_stat": false, 00:20:07.354 "rdma_srq_size": 0, 00:20:07.354 "io_path_stat": false, 00:20:07.354 "allow_accel_sequence": false, 00:20:07.354 "rdma_max_cq_size": 0, 00:20:07.354 "rdma_cm_event_timeout_ms": 0, 00:20:07.354 "dhchap_digests": [ 00:20:07.354 "sha256", 00:20:07.354 "sha384", 00:20:07.354 
"sha512" 00:20:07.354 ], 00:20:07.354 "dhchap_dhgroups": [ 00:20:07.354 "null", 00:20:07.354 "ffdhe2048", 00:20:07.354 "ffdhe3072", 00:20:07.354 "ffdhe4096", 00:20:07.354 "ffdhe6144", 00:20:07.354 "ffdhe8192" 00:20:07.354 ] 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.354 "method": "bdev_nvme_attach_controller", 00:20:07.354 "params": { 00:20:07.354 "name": "nvme0", 00:20:07.354 "trtype": "TCP", 00:20:07.354 "adrfam": "IPv4", 00:20:07.354 "traddr": "10.0.0.2", 00:20:07.354 "trsvcid": "4420", 00:20:07.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.354 "prchk_reftag": false, 00:20:07.354 "prchk_guard": false, 00:20:07.354 "ctrlr_loss_timeout_sec": 0, 00:20:07.354 "reconnect_delay_sec": 0, 00:20:07.354 "fast_io_fail_timeout_sec": 0, 00:20:07.354 "psk": "key0", 00:20:07.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.354 "hdgst": false, 00:20:07.354 "ddgst": false, 00:20:07.354 "multipath": "multipath" 00:20:07.354 } 00:20:07.354 }, 00:20:07.354 { 00:20:07.355 "method": "bdev_nvme_set_hotplug", 00:20:07.355 "params": { 00:20:07.355 "period_us": 100000, 00:20:07.355 "enable": false 00:20:07.355 } 00:20:07.355 }, 00:20:07.355 { 00:20:07.355 "method": "bdev_enable_histogram", 00:20:07.355 "params": { 00:20:07.355 "name": "nvme0n1", 00:20:07.355 "enable": true 00:20:07.355 } 00:20:07.355 }, 00:20:07.355 { 00:20:07.355 "method": "bdev_wait_for_examine" 00:20:07.355 } 00:20:07.355 ] 00:20:07.355 }, 00:20:07.355 { 00:20:07.355 "subsystem": "nbd", 00:20:07.355 "config": [] 00:20:07.355 } 00:20:07.355 ] 00:20:07.355 }' 00:20:07.355 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1050550 00:20:07.355 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1050550 ']' 00:20:07.355 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1050550 00:20:07.355 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.355 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.355 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1050550 00:20:07.615 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:07.615 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:07.615 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1050550' 00:20:07.615 killing process with pid 1050550 00:20:07.615 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1050550 00:20:07.615 Received shutdown signal, test time was about 1.000000 seconds 00:20:07.615 00:20:07.615 Latency(us) 00:20:07.615 [2024-10-30T13:07:05.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.615 [2024-10-30T13:07:05.915Z] =================================================================================================================== 00:20:07.616 [2024-10-30T13:07:05.915Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1050550 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1050331 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1050331 
']' 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1050331 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1050331 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1050331' 00:20:07.616 killing process with pid 1050331 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1050331 00:20:07.616 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1050331 00:20:07.877 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:07.877 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.877 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.877 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:07.877 "subsystems": [ 00:20:07.877 { 00:20:07.877 "subsystem": "keyring", 00:20:07.877 "config": [ 00:20:07.877 { 00:20:07.877 "method": "keyring_file_add_key", 00:20:07.877 "params": { 00:20:07.877 "name": "key0", 00:20:07.877 "path": "/tmp/tmp.EVxctdrZzx" 00:20:07.877 } 00:20:07.877 } 00:20:07.877 ] 00:20:07.877 }, 00:20:07.877 { 00:20:07.877 "subsystem": "iobuf", 00:20:07.877 "config": [ 00:20:07.877 { 00:20:07.877 "method": "iobuf_set_options", 00:20:07.877 "params": { 00:20:07.877 "small_pool_count": 8192, 00:20:07.877 "large_pool_count": 1024, 00:20:07.877 "small_bufsize": 8192, 00:20:07.877 "large_bufsize": 135168, 00:20:07.877 "enable_numa": false 00:20:07.877 } 00:20:07.877 } 00:20:07.877 ] 00:20:07.877 }, 00:20:07.877 { 00:20:07.877 "subsystem": "sock", 00:20:07.877 "config": [ 00:20:07.877 { 00:20:07.877 "method": "sock_set_default_impl", 00:20:07.877 "params": { 00:20:07.877 "impl_name": "posix" 00:20:07.877 } 00:20:07.877 }, 00:20:07.877 { 00:20:07.877 "method": "sock_impl_set_options", 00:20:07.877 "params": { 00:20:07.877 "impl_name": "ssl", 00:20:07.877 "recv_buf_size": 4096, 00:20:07.877 "send_buf_size": 4096, 00:20:07.877 "enable_recv_pipe": true, 00:20:07.877 "enable_quickack": false, 00:20:07.877 "enable_placement_id": 0, 00:20:07.877 "enable_zerocopy_send_server": true, 00:20:07.877 "enable_zerocopy_send_client": false, 00:20:07.877 "zerocopy_threshold": 0, 00:20:07.877 "tls_version": 0, 00:20:07.877 "enable_ktls": false 00:20:07.877 } 00:20:07.877 }, 00:20:07.877 { 00:20:07.877 "method": "sock_impl_set_options", 00:20:07.877 "params": { 00:20:07.877 "impl_name": "posix", 00:20:07.877 "recv_buf_size": 2097152, 00:20:07.877 "send_buf_size": 2097152, 00:20:07.877 "enable_recv_pipe": true, 00:20:07.877 "enable_quickack": false, 00:20:07.877 "enable_placement_id": 0, 00:20:07.877 "enable_zerocopy_send_server": true, 00:20:07.877 "enable_zerocopy_send_client": false, 00:20:07.877 "zerocopy_threshold": 0, 00:20:07.877 "tls_version": 0, 00:20:07.877 "enable_ktls": 
false 00:20:07.877 } 00:20:07.877 } 00:20:07.877 ] 00:20:07.877 }, 00:20:07.877 { 00:20:07.877 "subsystem": "vmd", 00:20:07.877 "config": [] 00:20:07.877 }, 00:20:07.877 { 00:20:07.877 "subsystem": "accel", 00:20:07.877 "config": [ 00:20:07.877 { 00:20:07.877 "method": "accel_set_options", 00:20:07.877 "params": { 00:20:07.877 "small_cache_size": 128, 00:20:07.877 "large_cache_size": 16, 00:20:07.877 "task_count": 2048, 00:20:07.877 "sequence_count": 2048, 00:20:07.877 "buf_count": 2048 00:20:07.877 } 00:20:07.877 } 00:20:07.877 ] 00:20:07.877 }, 00:20:07.877 { 00:20:07.877 "subsystem": "bdev", 00:20:07.877 "config": [ 00:20:07.877 { 00:20:07.877 "method": "bdev_set_options", 00:20:07.877 "params": { 00:20:07.877 "bdev_io_pool_size": 65535, 00:20:07.877 "bdev_io_cache_size": 256, 00:20:07.877 "bdev_auto_examine": true, 00:20:07.877 "iobuf_small_cache_size": 128, 00:20:07.877 "iobuf_large_cache_size": 16 00:20:07.877 } 00:20:07.877 }, 00:20:07.877 { 00:20:07.878 "method": "bdev_raid_set_options", 00:20:07.878 "params": { 00:20:07.878 "process_window_size_kb": 1024, 00:20:07.878 "process_max_bandwidth_mb_sec": 0 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "bdev_iscsi_set_options", 00:20:07.878 "params": { 00:20:07.878 "timeout_sec": 30 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "bdev_nvme_set_options", 00:20:07.878 "params": { 00:20:07.878 "action_on_timeout": "none", 00:20:07.878 "timeout_us": 0, 00:20:07.878 "timeout_admin_us": 0, 00:20:07.878 "keep_alive_timeout_ms": 10000, 00:20:07.878 "arbitration_burst": 0, 00:20:07.878 "low_priority_weight": 0, 00:20:07.878 "medium_priority_weight": 0, 00:20:07.878 "high_priority_weight": 0, 00:20:07.878 "nvme_adminq_poll_period_us": 10000, 00:20:07.878 "nvme_ioq_poll_period_us": 0, 00:20:07.878 "io_queue_requests": 0, 00:20:07.878 "delay_cmd_submit": true, 00:20:07.878 "transport_retry_count": 4, 00:20:07.878 "bdev_retry_count": 3, 00:20:07.878 "transport_ack_timeout": 0, 00:20:07.878 "ctrlr_loss_timeout_sec": 0, 00:20:07.878 "reconnect_delay_sec": 0, 00:20:07.878 "fast_io_fail_timeout_sec": 0, 00:20:07.878 "disable_auto_failback": false, 00:20:07.878 "generate_uuids": false, 00:20:07.878 "transport_tos": 0, 00:20:07.878 "nvme_error_stat": false, 00:20:07.878 "rdma_srq_size": 0, 00:20:07.878 "io_path_stat": false, 00:20:07.878 "allow_accel_sequence": false, 00:20:07.878 "rdma_max_cq_size": 0, 00:20:07.878 "rdma_cm_event_timeout_ms": 0, 00:20:07.878 "dhchap_digests": [ 00:20:07.878 "sha256", 00:20:07.878 "sha384", 00:20:07.878 "sha512" 00:20:07.878 ], 00:20:07.878 "dhchap_dhgroups": [ 00:20:07.878 "null", 00:20:07.878 "ffdhe2048", 00:20:07.878 "ffdhe3072", 00:20:07.878 "ffdhe4096", 00:20:07.878 "ffdhe6144", 00:20:07.878 "ffdhe8192" 00:20:07.878 ] 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "bdev_nvme_set_hotplug", 00:20:07.878 "params": { 00:20:07.878 "period_us": 100000, 00:20:07.878 "enable": false 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "bdev_malloc_create", 00:20:07.878 "params": { 00:20:07.878 "name": "malloc0", 00:20:07.878 "num_blocks": 8192, 00:20:07.878 "block_size": 4096, 00:20:07.878 "physical_block_size": 4096, 00:20:07.878 "uuid": "0c2ec783-f7d0-4b5e-8e80-0f4b2e47a870", 00:20:07.878 "optimal_io_boundary": 0, 00:20:07.878 "md_size": 0, 00:20:07.878 "dif_type": 0, 00:20:07.878 "dif_is_head_of_md": false, 00:20:07.878 "dif_pi_format": 0 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "bdev_wait_for_examine" 
00:20:07.878 } 00:20:07.878 ] 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "subsystem": "nbd", 00:20:07.878 "config": [] 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "subsystem": "scheduler", 00:20:07.878 "config": [ 00:20:07.878 { 00:20:07.878 "method": "framework_set_scheduler", 00:20:07.878 "params": { 00:20:07.878 "name": "static" 00:20:07.878 } 00:20:07.878 } 00:20:07.878 ] 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "subsystem": "nvmf", 00:20:07.878 "config": [ 00:20:07.878 { 00:20:07.878 "method": "nvmf_set_config", 00:20:07.878 "params": { 00:20:07.878 "discovery_filter": "match_any", 00:20:07.878 "admin_cmd_passthru": { 00:20:07.878 "identify_ctrlr": false 00:20:07.878 }, 00:20:07.878 "dhchap_digests": [ 00:20:07.878 "sha256", 00:20:07.878 "sha384", 00:20:07.878 "sha512" 00:20:07.878 ], 00:20:07.878 "dhchap_dhgroups": [ 00:20:07.878 "null", 00:20:07.878 "ffdhe2048", 00:20:07.878 "ffdhe3072", 00:20:07.878 "ffdhe4096", 00:20:07.878 "ffdhe6144", 00:20:07.878 "ffdhe8192" 00:20:07.878 ] 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "nvmf_set_max_subsystems", 00:20:07.878 "params": { 00:20:07.878 "max_subsystems": 1024 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "nvmf_set_crdt", 00:20:07.878 "params": { 00:20:07.878 "crdt1": 0, 00:20:07.878 "crdt2": 0, 00:20:07.878 "crdt3": 0 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "nvmf_create_transport", 00:20:07.878 "params": { 00:20:07.878 "trtype": "TCP", 00:20:07.878 "max_queue_depth": 128, 00:20:07.878 "max_io_qpairs_per_ctrlr": 127, 00:20:07.878 "in_capsule_data_size": 4096, 00:20:07.878 "max_io_size": 131072, 00:20:07.878 "io_unit_size": 131072, 00:20:07.878 "max_aq_depth": 128, 00:20:07.878 "num_shared_buffers": 511, 00:20:07.878 "buf_cache_size": 4294967295, 00:20:07.878 "dif_insert_or_strip": false, 00:20:07.878 "zcopy": false, 00:20:07.878 "c2h_success": false, 00:20:07.878 "sock_priority": 0, 00:20:07.878 "abort_timeout_sec": 1, 00:20:07.878 "ack_timeout": 0, 00:20:07.878 "data_wr_pool_size": 0 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "nvmf_create_subsystem", 00:20:07.878 "params": { 00:20:07.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.878 "allow_any_host": false, 00:20:07.878 "serial_number": "00000000000000000000", 00:20:07.878 "model_number": "SPDK bdev Controller", 00:20:07.878 "max_namespaces": 32, 00:20:07.878 "min_cntlid": 1, 00:20:07.878 "max_cntlid": 65519, 00:20:07.878 "ana_reporting": false 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "nvmf_subsystem_add_host", 00:20:07.878 "params": { 00:20:07.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.878 "host": "nqn.2016-06.io.spdk:host1", 00:20:07.878 "psk": "key0" 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "nvmf_subsystem_add_ns", 00:20:07.878 "params": { 00:20:07.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.878 "namespace": { 00:20:07.878 "nsid": 1, 00:20:07.878 "bdev_name": "malloc0", 00:20:07.878 "nguid": "0C2EC783F7D04B5E8E800F4B2E47A870", 00:20:07.878 "uuid": "0c2ec783-f7d0-4b5e-8e80-0f4b2e47a870", 00:20:07.878 "no_auto_visible": false 00:20:07.878 } 00:20:07.878 } 00:20:07.878 }, 00:20:07.878 { 00:20:07.878 "method": "nvmf_subsystem_add_listener", 00:20:07.878 "params": { 00:20:07.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.878 "listen_address": { 00:20:07.878 "trtype": "TCP", 00:20:07.878 "adrfam": "IPv4", 00:20:07.878 "traddr": "10.0.0.2", 00:20:07.878 "trsvcid": "4420" 00:20:07.878 }, 00:20:07.878 
"secure_channel": false, 00:20:07.878 "sock_impl": "ssl" 00:20:07.878 } 00:20:07.878 } 00:20:07.878 ] 00:20:07.878 } 00:20:07.878 ] 00:20:07.878 }' 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1051066 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1051066 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1051066 ']' 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.878 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.878 [2024-10-30 14:07:06.024657] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:20:07.878 [2024-10-30 14:07:06.024712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.878 [2024-10-30 14:07:06.115025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.878 [2024-10-30 14:07:06.143356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.878 [2024-10-30 14:07:06.143386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.878 [2024-10-30 14:07:06.143392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.878 [2024-10-30 14:07:06.143396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.878 [2024-10-30 14:07:06.143401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:07.878 [2024-10-30 14:07:06.143916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.140 [2024-10-30 14:07:06.337019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.140 [2024-10-30 14:07:06.369052] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.140 [2024-10-30 14:07:06.369228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1051391 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1051391 /var/tmp/bdevperf.sock 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1051391 ']' 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
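The initiator gets the same treatment next: bdevperf is relaunched with the saved bperfcfg piped in as -c /dev/fd/63, so the keyring entry and the TLS bdev_nvme_attach_controller are applied during start-up rather than over the RPC socket. A file-based equivalent, including the controller-name check the test performs before driving I/O, might look like this (bperf_config.json is an illustrative name):

  # launch bdevperf with the captured configuration; nvme0 is attached over TLS during init
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c bperf_config.json &

  # confirm the controller came up, then run the verify workload
  name=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests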
00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.712 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:08.712 "subsystems": [ 00:20:08.712 { 00:20:08.712 "subsystem": "keyring", 00:20:08.712 "config": [ 00:20:08.712 { 00:20:08.712 "method": "keyring_file_add_key", 00:20:08.712 "params": { 00:20:08.712 "name": "key0", 00:20:08.712 "path": "/tmp/tmp.EVxctdrZzx" 00:20:08.712 } 00:20:08.712 } 00:20:08.712 ] 00:20:08.712 }, 00:20:08.712 { 00:20:08.712 "subsystem": "iobuf", 00:20:08.712 "config": [ 00:20:08.712 { 00:20:08.712 "method": "iobuf_set_options", 00:20:08.712 "params": { 00:20:08.712 "small_pool_count": 8192, 00:20:08.712 "large_pool_count": 1024, 00:20:08.712 "small_bufsize": 8192, 00:20:08.712 "large_bufsize": 135168, 00:20:08.712 "enable_numa": false 00:20:08.712 } 00:20:08.712 } 00:20:08.712 ] 00:20:08.712 }, 00:20:08.712 { 00:20:08.712 "subsystem": "sock", 00:20:08.712 "config": [ 00:20:08.712 { 00:20:08.712 "method": "sock_set_default_impl", 00:20:08.712 "params": { 00:20:08.713 "impl_name": "posix" 00:20:08.713 } 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "method": "sock_impl_set_options", 00:20:08.713 "params": { 00:20:08.713 "impl_name": "ssl", 00:20:08.713 "recv_buf_size": 4096, 00:20:08.713 "send_buf_size": 4096, 00:20:08.713 "enable_recv_pipe": true, 00:20:08.713 "enable_quickack": false, 00:20:08.713 "enable_placement_id": 0, 00:20:08.713 "enable_zerocopy_send_server": true, 00:20:08.713 "enable_zerocopy_send_client": false, 00:20:08.713 "zerocopy_threshold": 0, 00:20:08.713 "tls_version": 0, 00:20:08.713 "enable_ktls": false 00:20:08.713 } 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "method": "sock_impl_set_options", 00:20:08.713 "params": { 00:20:08.713 "impl_name": "posix", 00:20:08.713 "recv_buf_size": 2097152, 00:20:08.713 "send_buf_size": 2097152, 00:20:08.713 "enable_recv_pipe": true, 00:20:08.713 "enable_quickack": false, 00:20:08.713 "enable_placement_id": 0, 00:20:08.713 "enable_zerocopy_send_server": true, 00:20:08.713 "enable_zerocopy_send_client": false, 00:20:08.713 "zerocopy_threshold": 0, 00:20:08.713 "tls_version": 0, 00:20:08.713 "enable_ktls": false 00:20:08.713 } 00:20:08.713 } 00:20:08.713 ] 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "subsystem": "vmd", 00:20:08.713 "config": [] 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "subsystem": "accel", 00:20:08.713 "config": [ 00:20:08.713 { 00:20:08.713 "method": "accel_set_options", 00:20:08.713 "params": { 00:20:08.713 "small_cache_size": 128, 00:20:08.713 "large_cache_size": 16, 00:20:08.713 "task_count": 2048, 00:20:08.713 "sequence_count": 2048, 00:20:08.713 "buf_count": 2048 00:20:08.713 } 00:20:08.713 } 00:20:08.713 ] 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "subsystem": "bdev", 00:20:08.713 "config": [ 00:20:08.713 { 00:20:08.713 "method": "bdev_set_options", 00:20:08.713 "params": { 00:20:08.713 "bdev_io_pool_size": 65535, 00:20:08.713 "bdev_io_cache_size": 256, 00:20:08.713 "bdev_auto_examine": true, 00:20:08.713 "iobuf_small_cache_size": 128, 00:20:08.713 "iobuf_large_cache_size": 16 00:20:08.713 } 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "method": 
"bdev_raid_set_options", 00:20:08.713 "params": { 00:20:08.713 "process_window_size_kb": 1024, 00:20:08.713 "process_max_bandwidth_mb_sec": 0 00:20:08.713 } 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "method": "bdev_iscsi_set_options", 00:20:08.713 "params": { 00:20:08.713 "timeout_sec": 30 00:20:08.713 } 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "method": "bdev_nvme_set_options", 00:20:08.713 "params": { 00:20:08.713 "action_on_timeout": "none", 00:20:08.713 "timeout_us": 0, 00:20:08.713 "timeout_admin_us": 0, 00:20:08.713 "keep_alive_timeout_ms": 10000, 00:20:08.713 "arbitration_burst": 0, 00:20:08.713 "low_priority_weight": 0, 00:20:08.713 "medium_priority_weight": 0, 00:20:08.713 "high_priority_weight": 0, 00:20:08.713 "nvme_adminq_poll_period_us": 10000, 00:20:08.713 "nvme_ioq_poll_period_us": 0, 00:20:08.713 "io_queue_requests": 512, 00:20:08.713 "delay_cmd_submit": true, 00:20:08.713 "transport_retry_count": 4, 00:20:08.713 "bdev_retry_count": 3, 00:20:08.713 "transport_ack_timeout": 0, 00:20:08.713 "ctrlr_loss_timeout_sec": 0, 00:20:08.713 "reconnect_delay_sec": 0, 00:20:08.713 "fast_io_fail_timeout_sec": 0, 00:20:08.713 "disable_auto_failback": false, 00:20:08.713 "generate_uuids": false, 00:20:08.713 "transport_tos": 0, 00:20:08.713 "nvme_error_stat": false, 00:20:08.713 "rdma_srq_size": 0, 00:20:08.713 "io_path_stat": false, 00:20:08.713 "allow_accel_sequence": false, 00:20:08.713 "rdma_max_cq_size": 0, 00:20:08.713 "rdma_cm_event_timeout_ms": 0, 00:20:08.713 "dhchap_digests": [ 00:20:08.713 "sha256", 00:20:08.713 "sha384", 00:20:08.713 "sha512" 00:20:08.713 ], 00:20:08.713 "dhchap_dhgroups": [ 00:20:08.713 "null", 00:20:08.713 "ffdhe2048", 00:20:08.713 "ffdhe3072", 00:20:08.713 "ffdhe4096", 00:20:08.713 "ffdhe6144", 00:20:08.713 "ffdhe8192" 00:20:08.713 ] 00:20:08.713 } 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "method": "bdev_nvme_attach_controller", 00:20:08.713 "params": { 00:20:08.713 "name": "nvme0", 00:20:08.713 "trtype": "TCP", 00:20:08.713 "adrfam": "IPv4", 00:20:08.713 "traddr": "10.0.0.2", 00:20:08.713 "trsvcid": "4420", 00:20:08.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.713 "prchk_reftag": false, 00:20:08.713 "prchk_guard": false, 00:20:08.713 "ctrlr_loss_timeout_sec": 0, 00:20:08.713 "reconnect_delay_sec": 0, 00:20:08.713 "fast_io_fail_timeout_sec": 0, 00:20:08.713 "psk": "key0", 00:20:08.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.713 "hdgst": false, 00:20:08.713 "ddgst": false, 00:20:08.713 "multipath": "multipath" 00:20:08.713 } 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "method": "bdev_nvme_set_hotplug", 00:20:08.713 "params": { 00:20:08.713 "period_us": 100000, 00:20:08.713 "enable": false 00:20:08.713 } 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "method": "bdev_enable_histogram", 00:20:08.713 "params": { 00:20:08.713 "name": "nvme0n1", 00:20:08.713 "enable": true 00:20:08.713 } 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "method": "bdev_wait_for_examine" 00:20:08.713 } 00:20:08.713 ] 00:20:08.713 }, 00:20:08.713 { 00:20:08.713 "subsystem": "nbd", 00:20:08.713 "config": [] 00:20:08.713 } 00:20:08.713 ] 00:20:08.713 }' 00:20:08.713 [2024-10-30 14:07:06.908216] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:20:08.713 [2024-10-30 14:07:06.908269] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051391 ] 00:20:08.713 [2024-10-30 14:07:06.991432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.974 [2024-10-30 14:07:07.020721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.974 [2024-10-30 14:07:07.155303] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.545 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.545 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:09.545 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:09.545 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:09.806 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.806 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.806 Running I/O for 1 seconds... 00:20:10.746 4056.00 IOPS, 15.84 MiB/s 00:20:10.746 Latency(us) 00:20:10.746 [2024-10-30T13:07:09.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.746 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:10.746 Verification LBA range: start 0x0 length 0x2000 00:20:10.746 nvme0n1 : 1.04 4019.89 15.70 0.00 0.00 31272.71 4532.91 109226.67 00:20:10.746 [2024-10-30T13:07:09.045Z] =================================================================================================================== 00:20:10.746 [2024-10-30T13:07:09.045Z] Total : 4019.89 15.70 0.00 0.00 31272.71 4532.91 109226.67 00:20:10.746 { 00:20:10.746 "results": [ 00:20:10.746 { 00:20:10.746 "job": "nvme0n1", 00:20:10.746 "core_mask": "0x2", 00:20:10.746 "workload": "verify", 00:20:10.746 "status": "finished", 00:20:10.746 "verify_range": { 00:20:10.746 "start": 0, 00:20:10.746 "length": 8192 00:20:10.746 }, 00:20:10.746 "queue_depth": 128, 00:20:10.746 "io_size": 4096, 00:20:10.746 "runtime": 1.040825, 00:20:10.746 "iops": 4019.8880695602047, 00:20:10.746 "mibps": 15.70268777171955, 00:20:10.746 "io_failed": 0, 00:20:10.746 "io_timeout": 0, 00:20:10.746 "avg_latency_us": 31272.711994901212, 00:20:10.746 "min_latency_us": 4532.906666666667, 00:20:10.746 "max_latency_us": 109226.66666666667 00:20:10.746 } 00:20:10.746 ], 00:20:10.746 "core_count": 1 00:20:10.746 } 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' 
--id = --pid ']' 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:11.007 nvmf_trace.0 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1051391 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1051391 ']' 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1051391 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.007 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1051391 00:20:11.008 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:11.008 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:11.008 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1051391' 00:20:11.008 killing process with pid 1051391 00:20:11.008 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1051391 00:20:11.008 Received shutdown signal, test time was about 1.000000 seconds 00:20:11.008 00:20:11.008 Latency(us) 00:20:11.008 [2024-10-30T13:07:09.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.008 [2024-10-30T13:07:09.307Z] =================================================================================================================== 00:20:11.008 [2024-10-30T13:07:09.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:11.008 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1051391 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.269 rmmod nvme_tcp 00:20:11.269 rmmod nvme_fabrics 00:20:11.269 rmmod nvme_keyring 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.269 14:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1051066 ']' 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1051066 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1051066 ']' 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1051066 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1051066 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1051066' 00:20:11.269 killing process with pid 1051066 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1051066 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1051066 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.269 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:11.530 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:11.530 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.530 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.530 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.530 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.530 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.530 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.531 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.444 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:13.444 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.0lxtGtfq34 /tmp/tmp.Ha9A2hzU38 /tmp/tmp.EVxctdrZzx 00:20:13.444 00:20:13.444 real 1m27.669s 00:20:13.444 user 2m18.408s 00:20:13.444 sys 0m27.137s 00:20:13.444 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.444 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.444 ************************************ 00:20:13.444 END TEST nvmf_tls 
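For reference, the firewall cleanup traced in this teardown relies on rule tagging rather than a blanket flush: every rule the suite adds carries an SPDK_NVMF comment, so the iptr step can drop exactly those rules with a save/filter/restore round trip. A minimal sketch of that pattern, using the same commands and the interface/port values shown in this run (anything else would be host-specific):

    # Add a rule the way the ipts helper does, tagging it for later removal.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Teardown (the iptr step above): keep every rule except the tagged ones.
    iptables-save | grep -v SPDK_NVMF | iptables-restore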
00:20:13.444 ************************************ 00:20:13.444 14:07:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:13.444 14:07:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:13.444 14:07:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.444 14:07:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:13.444 ************************************ 00:20:13.444 START TEST nvmf_fips 00:20:13.444 ************************************ 00:20:13.444 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:13.706 * Looking for test storage... 00:20:13.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:13.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.706 --rc genhtml_branch_coverage=1 00:20:13.706 --rc genhtml_function_coverage=1 00:20:13.706 --rc genhtml_legend=1 00:20:13.706 --rc geninfo_all_blocks=1 00:20:13.706 --rc geninfo_unexecuted_blocks=1 00:20:13.706 00:20:13.706 ' 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:13.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.706 --rc genhtml_branch_coverage=1 00:20:13.706 --rc genhtml_function_coverage=1 00:20:13.706 --rc genhtml_legend=1 00:20:13.706 --rc geninfo_all_blocks=1 00:20:13.706 --rc geninfo_unexecuted_blocks=1 00:20:13.706 00:20:13.706 ' 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:13.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.706 --rc genhtml_branch_coverage=1 00:20:13.706 --rc genhtml_function_coverage=1 00:20:13.706 --rc genhtml_legend=1 00:20:13.706 --rc geninfo_all_blocks=1 00:20:13.706 --rc geninfo_unexecuted_blocks=1 00:20:13.706 00:20:13.706 ' 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:13.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.706 --rc genhtml_branch_coverage=1 00:20:13.706 --rc genhtml_function_coverage=1 00:20:13.706 --rc genhtml_legend=1 00:20:13.706 --rc geninfo_all_blocks=1 00:20:13.706 --rc geninfo_unexecuted_blocks=1 00:20:13.706 00:20:13.706 ' 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.706 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:13.707 14:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:13.707 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:13.707 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.707 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:13.707 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:13.968 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:13.969 Error setting digest 00:20:13.969 40722136887F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:13.969 40722136887F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:13.969 
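The block above is the FIPS gate for this test: it requires OpenSSL >= 3.0, a fips.so provider module, and a generated spdk_fips.conf that loads both the base and fips providers, then proves enforcement by expecting a non-approved digest to fail. A condensed sketch of those checks, assuming spdk_fips.conf has already been written the way build_openssl_config does in the trace:

    # OpenSSL 3.x with the FIPS provider module installed?
    openssl version | awk '{print $2}'                      # 3.1.1 in this run
    ls "$(openssl info -modulesdir)/fips.so"                # /usr/lib64/ossl-modules/fips.so here

    # With the generated config active, both providers should be listed...
    OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name

    # ...and MD5 (not FIPS-approved) must be rejected, exactly as the trace shows.
    echo test | OPENSSL_CONF=spdk_fips.conf openssl md5 && echo 'FIPS not enforced'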
14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:13.969 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.115 14:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:22.115 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:22.115 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.115 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.116 14:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:22.116 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:22.116 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:22.116 14:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:22.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:20:22.116 00:20:22.116 --- 10.0.0.2 ping statistics --- 00:20:22.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.116 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:20:22.116 00:20:22.116 --- 10.0.0.1 ping statistics --- 00:20:22.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.116 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1056096 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1056096 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1056096 ']' 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.116 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.116 [2024-10-30 14:07:19.497028] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
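Before starting the target, nvmf_tcp_init moved one port of the detected e810 pair into a private namespace, addressed both ends, and proved reachability in both directions; the pings above are that check. A condensed replay of the traced sequence, using the interface names detected on this host (cvl_0_0/cvl_0_1, which will differ elsewhere):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                                 # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host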
00:20:22.116 [2024-10-30 14:07:19.497080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.116 [2024-10-30 14:07:19.592119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.116 [2024-10-30 14:07:19.625788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.116 [2024-10-30 14:07:19.625821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.116 [2024-10-30 14:07:19.625829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.116 [2024-10-30 14:07:19.625836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.116 [2024-10-30 14:07:19.625842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.116 [2024-10-30 14:07:19.626408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Psz 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Psz 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Psz 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Psz 00:20:22.116 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:22.378 [2024-10-30 14:07:20.514589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.378 [2024-10-30 14:07:20.530597] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.378 [2024-10-30 14:07:20.530969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.378 malloc0 00:20:22.378 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:22.378 14:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1056366 00:20:22.378 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1056366 /var/tmp/bdevperf.sock 00:20:22.378 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.378 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1056366 ']' 00:20:22.378 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.378 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.378 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.378 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.378 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.378 [2024-10-30 14:07:20.672522] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:20:22.378 [2024-10-30 14:07:20.672600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056366 ] 00:20:22.639 [2024-10-30 14:07:20.764713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.639 [2024-10-30 14:07:20.813923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.213 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.213 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:23.213 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Psz 00:20:23.475 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.736 [2024-10-30 14:07:21.786881] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.736 TLSTESTn1 00:20:23.736 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.736 Running I/O for 10 seconds... 
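The keyring/attach pair traced above is the whole initiator-side TLS path: register the interchange-format PSK written to /tmp/spdk-psk.Psz with the bdevperf app's keyring, attach an NVMe/TCP controller to the 10.0.0.2:4420 listener with that key, then drive verify I/O through the resulting TLSTESTn1 bdev. A condensed sketch taken from the trace (paths are the ones used in this workspace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # 1) Make the PSK file available to bdevperf under the name key0.
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/spdk-psk.Psz

    # 2) Attach over TCP, wrapping the connection in TLS via --psk.
    $RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # 3) Run the queued bdevperf job (-q 128 -o 4096 -w verify -t 10) against TLSTESTn1.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s $SOCK perform_tests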
00:20:25.694 3539.00 IOPS, 13.82 MiB/s [2024-10-30T13:07:25.375Z] 4464.50 IOPS, 17.44 MiB/s [2024-10-30T13:07:26.316Z] 4932.33 IOPS, 19.27 MiB/s [2024-10-30T13:07:27.256Z] 5258.00 IOPS, 20.54 MiB/s [2024-10-30T13:07:28.199Z] 5535.20 IOPS, 21.62 MiB/s [2024-10-30T13:07:29.140Z] 5638.83 IOPS, 22.03 MiB/s [2024-10-30T13:07:30.083Z] 5694.86 IOPS, 22.25 MiB/s [2024-10-30T13:07:31.024Z] 5768.25 IOPS, 22.53 MiB/s [2024-10-30T13:07:32.406Z] 5670.33 IOPS, 22.15 MiB/s [2024-10-30T13:07:32.406Z] 5700.70 IOPS, 22.27 MiB/s 00:20:34.107 Latency(us) 00:20:34.107 [2024-10-30T13:07:32.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.107 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.107 Verification LBA range: start 0x0 length 0x2000 00:20:34.107 TLSTESTn1 : 10.01 5705.74 22.29 0.00 0.00 22400.04 6007.47 56797.87 00:20:34.107 [2024-10-30T13:07:32.406Z] =================================================================================================================== 00:20:34.107 [2024-10-30T13:07:32.406Z] Total : 5705.74 22.29 0.00 0.00 22400.04 6007.47 56797.87 00:20:34.107 { 00:20:34.107 "results": [ 00:20:34.107 { 00:20:34.107 "job": "TLSTESTn1", 00:20:34.107 "core_mask": "0x4", 00:20:34.107 "workload": "verify", 00:20:34.107 "status": "finished", 00:20:34.107 "verify_range": { 00:20:34.107 "start": 0, 00:20:34.107 "length": 8192 00:20:34.107 }, 00:20:34.107 "queue_depth": 128, 00:20:34.107 "io_size": 4096, 00:20:34.107 "runtime": 10.013418, 00:20:34.107 "iops": 5705.744032656981, 00:20:34.107 "mibps": 22.288062627566333, 00:20:34.107 "io_failed": 0, 00:20:34.107 "io_timeout": 0, 00:20:34.107 "avg_latency_us": 22400.038295002392, 00:20:34.107 "min_latency_us": 6007.466666666666, 00:20:34.107 "max_latency_us": 56797.86666666667 00:20:34.107 } 00:20:34.107 ], 00:20:34.107 "core_count": 1 00:20:34.107 } 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:34.107 nvmf_trace.0 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1056366 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1056366 ']' 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1056366 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1056366 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:34.107 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1056366' 00:20:34.108 killing process with pid 1056366 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1056366 00:20:34.108 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.108 00:20:34.108 Latency(us) 00:20:34.108 [2024-10-30T13:07:32.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.108 [2024-10-30T13:07:32.407Z] =================================================================================================================== 00:20:34.108 [2024-10-30T13:07:32.407Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1056366 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:34.108 rmmod nvme_tcp 00:20:34.108 rmmod nvme_fabrics 00:20:34.108 rmmod nvme_keyring 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1056096 ']' 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1056096 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1056096 ']' 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1056096 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.108 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1056096 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:34.368 14:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1056096' 00:20:34.368 killing process with pid 1056096 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1056096 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1056096 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.368 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Psz 00:20:36.911 00:20:36.911 real 0m22.905s 00:20:36.911 user 0m24.893s 00:20:36.911 sys 0m9.272s 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:36.911 ************************************ 00:20:36.911 END TEST nvmf_fips 00:20:36.911 ************************************ 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:36.911 ************************************ 00:20:36.911 START TEST nvmf_control_msg_list 00:20:36.911 ************************************ 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:36.911 * Looking for test storage... 
00:20:36.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:36.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.911 --rc genhtml_branch_coverage=1 00:20:36.911 --rc genhtml_function_coverage=1 00:20:36.911 --rc genhtml_legend=1 00:20:36.911 --rc geninfo_all_blocks=1 00:20:36.911 --rc geninfo_unexecuted_blocks=1 00:20:36.911 00:20:36.911 ' 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:36.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.911 --rc genhtml_branch_coverage=1 00:20:36.911 --rc genhtml_function_coverage=1 00:20:36.911 --rc genhtml_legend=1 00:20:36.911 --rc geninfo_all_blocks=1 00:20:36.911 --rc geninfo_unexecuted_blocks=1 00:20:36.911 00:20:36.911 ' 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:36.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.911 --rc genhtml_branch_coverage=1 00:20:36.911 --rc genhtml_function_coverage=1 00:20:36.911 --rc genhtml_legend=1 00:20:36.911 --rc geninfo_all_blocks=1 00:20:36.911 --rc geninfo_unexecuted_blocks=1 00:20:36.911 00:20:36.911 ' 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:36.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.911 --rc genhtml_branch_coverage=1 00:20:36.911 --rc genhtml_function_coverage=1 00:20:36.911 --rc genhtml_legend=1 00:20:36.911 --rc geninfo_all_blocks=1 00:20:36.911 --rc geninfo_unexecuted_blocks=1 00:20:36.911 00:20:36.911 ' 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.911 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.912 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:45.053 14:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:45.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.053 14:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:45.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:45.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:45.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.053 14:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:45.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:20:45.053 00:20:45.053 --- 10.0.0.2 ping statistics --- 00:20:45.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.053 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:20:45.053 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:45.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:20:45.054 00:20:45.054 --- 10.0.0.1 ping statistics --- 00:20:45.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.054 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1062801 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1062801 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1062801 ']' 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.054 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 [2024-10-30 14:07:42.505174] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:20:45.054 [2024-10-30 14:07:42.505264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.054 [2024-10-30 14:07:42.603475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.054 [2024-10-30 14:07:42.653732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.054 [2024-10-30 14:07:42.653790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.054 [2024-10-30 14:07:42.653799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.054 [2024-10-30 14:07:42.653806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.054 [2024-10-30 14:07:42.653812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
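The trace above is the harness's nvmfappstart step: the SPDK NVMe-oF target is launched inside the cvl_0_0_ns_spdk network namespace with "ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF", and the harness then blocks on the UNIX RPC socket /var/tmp/spdk.sock until the application answers. A minimal sketch of that launch-and-poll pattern, assuming scripts/rpc.py with rpc_get_methods as the readiness probe (the harness's waitforlisten helper may differ in detail):

#!/usr/bin/env bash
# Launch the SPDK NVMe-oF target inside the test network namespace used in this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
tgt_pid=$!

# The RPC socket lives on the shared filesystem, so it can be polled from the default
# namespace; loop until the target responds, or bail out if it died during startup.
until sudo "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"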
00:20:45.054 [2024-10-30 14:07:42.654604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.054 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.054 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:45.054 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.054 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.054 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.316 [2024-10-30 14:07:43.356479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.316 Malloc0 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.316 14:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.316 [2024-10-30 14:07:43.410893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1062996 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1062998 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1062999 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1062996 00:20:45.316 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.316 [2024-10-30 14:07:43.511892] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:45.316 [2024-10-30 14:07:43.512229] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:45.316 [2024-10-30 14:07:43.512522] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:46.702 Initializing NVMe Controllers 00:20:46.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:46.702 Initialization complete. Launching workers. 
00:20:46.702 ======================================================== 00:20:46.702 Latency(us) 00:20:46.702 Device Information : IOPS MiB/s Average min max 00:20:46.702 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1515.00 5.92 659.96 183.12 1083.61 00:20:46.702 ======================================================== 00:20:46.702 Total : 1515.00 5.92 659.96 183.12 1083.61 00:20:46.702 00:20:46.702 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1062998 00:20:46.702 Initializing NVMe Controllers 00:20:46.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:46.702 Initialization complete. Launching workers. 00:20:46.702 ======================================================== 00:20:46.702 Latency(us) 00:20:46.702 Device Information : IOPS MiB/s Average min max 00:20:46.702 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1608.00 6.28 621.94 285.58 1069.33 00:20:46.702 ======================================================== 00:20:46.702 Total : 1608.00 6.28 621.94 285.58 1069.33 00:20:46.702 00:20:46.702 Initializing NVMe Controllers 00:20:46.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:46.702 Initialization complete. Launching workers. 00:20:46.702 ======================================================== 00:20:46.702 Latency(us) 00:20:46.702 Device Information : IOPS MiB/s Average min max 00:20:46.703 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 52.00 0.20 19277.85 266.01 41417.19 00:20:46.703 ======================================================== 00:20:46.703 Total : 52.00 0.20 19277.85 266.01 41417.19 00:20:46.703 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1062999 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:46.703 rmmod nvme_tcp 00:20:46.703 rmmod nvme_fabrics 00:20:46.703 rmmod nvme_keyring 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 1062801 ']' 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1062801 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1062801 ']' 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1062801 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1062801 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1062801' 00:20:46.703 killing process with pid 1062801 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1062801 00:20:46.703 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1062801 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.964 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.881 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.881 00:20:48.881 real 0m12.392s 00:20:48.881 user 0m7.973s 00:20:48.881 sys 0m6.524s 00:20:48.881 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.881 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.881 ************************************ 00:20:48.881 END TEST nvmf_control_msg_list 00:20:48.881 ************************************ 
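To summarize what the control_msg_list run above exercised: the TCP transport was created with a single control message (--control-msg-num 1) and a 768-byte in-capsule data size, a 32 MB malloc namespace was exported through nqn.2024-07.io.spdk:cnode0 on 10.0.0.2:4420, and three single-queue spdk_nvme_perf initiators ran against it; two finished with sub-millisecond average latency while the third averaged about 19 ms at 52 IOPS, which appears to be the control-message contention the test is designed to provoke. A condensed sketch of that sequence, with a hypothetical rpc() wrapper standing in for the harness's rpc_cmd helper and the flags reproduced from the trace above:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# rpc() approximates the harness's rpc_cmd: forward arguments to rpc.py on the default socket.
rpc() { sudo "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

# Transport with a deliberately tiny control message pool (same options as seen in the log).
rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

# Subsystem with a 32 MB, 512-byte-block malloc namespace, listening on TCP 10.0.0.2:4420.
rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
rpc bdev_malloc_create -b Malloc0 32 512
rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Three single-queue 4 KiB randread initiators on separate cores, competing for the one control message.
for mask in 0x2 0x4 0x8; do
    sudo "$SPDK/build/bin/spdk_nvme_perf" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait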
00:20:48.881 14:07:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:48.881 14:07:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.881 14:07:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.881 14:07:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:49.144 ************************************ 00:20:49.144 START TEST nvmf_wait_for_buf 00:20:49.144 ************************************ 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:49.144 * Looking for test storage... 00:20:49.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:49.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.144 --rc genhtml_branch_coverage=1 00:20:49.144 --rc genhtml_function_coverage=1 00:20:49.144 --rc genhtml_legend=1 00:20:49.144 --rc geninfo_all_blocks=1 00:20:49.144 --rc geninfo_unexecuted_blocks=1 00:20:49.144 00:20:49.144 ' 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:49.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.144 --rc genhtml_branch_coverage=1 00:20:49.144 --rc genhtml_function_coverage=1 00:20:49.144 --rc genhtml_legend=1 00:20:49.144 --rc geninfo_all_blocks=1 00:20:49.144 --rc geninfo_unexecuted_blocks=1 00:20:49.144 00:20:49.144 ' 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:49.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.144 --rc genhtml_branch_coverage=1 00:20:49.144 --rc genhtml_function_coverage=1 00:20:49.144 --rc genhtml_legend=1 00:20:49.144 --rc geninfo_all_blocks=1 00:20:49.144 --rc geninfo_unexecuted_blocks=1 00:20:49.144 00:20:49.144 ' 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:49.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.144 --rc genhtml_branch_coverage=1 00:20:49.144 --rc genhtml_function_coverage=1 00:20:49.144 --rc genhtml_legend=1 00:20:49.144 --rc geninfo_all_blocks=1 00:20:49.144 --rc geninfo_unexecuted_blocks=1 00:20:49.144 00:20:49.144 ' 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.144 14:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.144 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:49.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.145 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.407 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:49.407 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:49.407 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:49.407 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.544 
14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:57.544 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:57.544 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.544 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:57.545 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:57.545 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.545 14:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:57.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:20:57.545 00:20:57.545 --- 10.0.0.2 ping statistics --- 00:20:57.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.545 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:57.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:20:57.545 00:20:57.545 --- 10.0.0.1 ping statistics --- 00:20:57.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.545 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1067485 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1067485 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1067485 ']' 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.545 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.545 [2024-10-30 14:07:54.986785] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:20:57.545 [2024-10-30 14:07:54.986851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.545 [2024-10-30 14:07:55.088427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.545 [2024-10-30 14:07:55.138470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.545 [2024-10-30 14:07:55.138521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.545 [2024-10-30 14:07:55.138530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.545 [2024-10-30 14:07:55.138537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.545 [2024-10-30 14:07:55.138544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.545 [2024-10-30 14:07:55.139348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.545 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.545 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:57.545 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.545 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.545 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.807 14:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 Malloc0 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 [2024-10-30 14:07:55.981283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.807 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.807 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:57.807 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.807 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 [2024-10-30 14:07:56.017582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.807 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.807 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:58.068 [2024-10-30 14:07:56.123872] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:59.452 Initializing NVMe Controllers 00:20:59.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:59.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:59.452 Initialization complete. Launching workers. 00:20:59.452 ======================================================== 00:20:59.452 Latency(us) 00:20:59.453 Device Information : IOPS MiB/s Average min max 00:20:59.453 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32295.33 8014.32 63909.67 00:20:59.453 ======================================================== 00:20:59.453 Total : 129.00 16.12 32295.33 8014.32 63909.67 00:20:59.453 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.453 rmmod nvme_tcp 00:20:59.453 rmmod nvme_fabrics 00:20:59.453 rmmod nvme_keyring 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1067485 ']' 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1067485 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1067485 ']' 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1067485 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.453 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1067485 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1067485' 00:20:59.714 killing process with pid 1067485 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1067485 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1067485 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.714 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.253 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:02.253 00:21:02.253 real 0m12.787s 00:21:02.253 user 0m5.271s 00:21:02.253 sys 0m6.126s 00:21:02.253 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.253 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.253 ************************************ 00:21:02.253 END TEST nvmf_wait_for_buf 00:21:02.253 ************************************ 00:21:02.253 14:08:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:02.253 14:08:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:02.253 14:08:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:02.253 14:08:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:02.253 14:08:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.253 14:08:00 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:08.836 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:08.836 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:08.836 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:08.836 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.836 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:08.837 14:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:08.837 14:08:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:08.837 14:08:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.837 14:08:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:09.098 ************************************ 00:21:09.098 START TEST nvmf_perf_adq 00:21:09.098 ************************************ 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:09.098 * Looking for test storage... 00:21:09.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.098 14:08:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.098 --rc genhtml_branch_coverage=1 00:21:09.098 --rc genhtml_function_coverage=1 00:21:09.098 --rc genhtml_legend=1 00:21:09.098 --rc geninfo_all_blocks=1 00:21:09.098 --rc geninfo_unexecuted_blocks=1 00:21:09.098 00:21:09.098 ' 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.098 --rc genhtml_branch_coverage=1 00:21:09.098 --rc genhtml_function_coverage=1 00:21:09.098 --rc genhtml_legend=1 00:21:09.098 --rc geninfo_all_blocks=1 00:21:09.098 --rc geninfo_unexecuted_blocks=1 00:21:09.098 00:21:09.098 ' 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.098 --rc genhtml_branch_coverage=1 00:21:09.098 --rc genhtml_function_coverage=1 00:21:09.098 --rc genhtml_legend=1 00:21:09.098 --rc geninfo_all_blocks=1 00:21:09.098 --rc geninfo_unexecuted_blocks=1 00:21:09.098 00:21:09.098 ' 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.098 --rc genhtml_branch_coverage=1 00:21:09.098 --rc genhtml_function_coverage=1 00:21:09.098 --rc genhtml_legend=1 00:21:09.098 --rc geninfo_all_blocks=1 00:21:09.098 --rc geninfo_unexecuted_blocks=1 00:21:09.098 00:21:09.098 ' 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.098 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:09.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:09.358 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:09.359 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:09.359 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:09.359 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:09.359 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:09.359 14:08:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:17.499 14:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:17.499 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:17.499 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:17.499 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:17.499 14:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:17.499 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:17.499 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:18.070 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:20.617 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:26.045 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:26.046 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:26.046 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:26.046 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:26.046 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:26.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:21:26.046 00:21:26.046 --- 10.0.0.2 ping statistics --- 00:21:26.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.046 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:21:26.046 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:26.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:21:26.047 00:21:26.047 --- 10.0.0.1 ping statistics --- 00:21:26.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.047 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1077727 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1077727 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1077727 ']' 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.047 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.047 [2024-10-30 14:08:23.732135] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:21:26.047 [2024-10-30 14:08:23.732205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.047 [2024-10-30 14:08:23.836293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.047 [2024-10-30 14:08:23.889999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.047 [2024-10-30 14:08:23.890052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.047 [2024-10-30 14:08:23.890061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.047 [2024-10-30 14:08:23.890072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.047 [2024-10-30 14:08:23.890080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.047 [2024-10-30 14:08:23.892205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.047 [2024-10-30 14:08:23.892277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.047 [2024-10-30 14:08:23.892436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.047 [2024-10-30 14:08:23.892436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.308 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.570 
14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.570 [2024-10-30 14:08:24.757043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.570 Malloc1 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.570 [2024-10-30 14:08:24.839064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1077953 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:26.570 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:29.115 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:29.116 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.116 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.116 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.116 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:29.116 "tick_rate": 2400000000, 00:21:29.116 "poll_groups": [ 00:21:29.116 { 00:21:29.116 "name": "nvmf_tgt_poll_group_000", 00:21:29.116 "admin_qpairs": 1, 00:21:29.116 "io_qpairs": 1, 00:21:29.116 "current_admin_qpairs": 1, 00:21:29.116 "current_io_qpairs": 1, 00:21:29.116 "pending_bdev_io": 0, 00:21:29.116 "completed_nvme_io": 23437, 00:21:29.116 "transports": [ 00:21:29.116 { 00:21:29.116 "trtype": "TCP" 00:21:29.116 } 00:21:29.116 ] 00:21:29.116 }, 00:21:29.116 { 00:21:29.116 "name": "nvmf_tgt_poll_group_001", 00:21:29.116 "admin_qpairs": 0, 00:21:29.116 "io_qpairs": 1, 00:21:29.116 "current_admin_qpairs": 0, 00:21:29.116 "current_io_qpairs": 1, 00:21:29.116 "pending_bdev_io": 0, 00:21:29.116 "completed_nvme_io": 18525, 00:21:29.116 "transports": [ 00:21:29.116 { 00:21:29.116 "trtype": "TCP" 00:21:29.116 } 00:21:29.116 ] 00:21:29.116 }, 00:21:29.116 { 00:21:29.116 "name": "nvmf_tgt_poll_group_002", 00:21:29.116 "admin_qpairs": 0, 00:21:29.116 "io_qpairs": 1, 00:21:29.116 "current_admin_qpairs": 0, 00:21:29.116 "current_io_qpairs": 1, 00:21:29.116 "pending_bdev_io": 0, 00:21:29.116 "completed_nvme_io": 19497, 00:21:29.116 "transports": [ 00:21:29.116 { 00:21:29.116 "trtype": "TCP" 00:21:29.116 } 00:21:29.116 ] 00:21:29.116 }, 00:21:29.116 { 00:21:29.116 "name": "nvmf_tgt_poll_group_003", 00:21:29.116 "admin_qpairs": 0, 00:21:29.116 "io_qpairs": 1, 00:21:29.116 "current_admin_qpairs": 0, 00:21:29.116 "current_io_qpairs": 1, 00:21:29.116 "pending_bdev_io": 0, 00:21:29.116 "completed_nvme_io": 18138, 00:21:29.116 "transports": [ 00:21:29.116 { 00:21:29.116 "trtype": "TCP" 00:21:29.116 } 00:21:29.116 ] 00:21:29.116 } 00:21:29.116 ] 00:21:29.116 }' 00:21:29.116 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:29.116 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:29.116 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:29.116 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:29.116 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1077953 00:21:37.253 Initializing NVMe Controllers 00:21:37.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:37.253 Initialization complete. Launching workers. 00:21:37.253 ======================================================== 00:21:37.253 Latency(us) 00:21:37.253 Device Information : IOPS MiB/s Average min max 00:21:37.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11982.70 46.81 5342.07 1239.51 11871.88 00:21:37.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13189.50 51.52 4852.48 1287.80 12382.61 00:21:37.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13459.10 52.57 4768.48 1162.58 45599.57 00:21:37.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13274.50 51.85 4821.64 1295.55 12798.64 00:21:37.253 ======================================================== 00:21:37.253 Total : 51905.79 202.76 4935.84 1162.58 45599.57 00:21:37.253 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.253 rmmod nvme_tcp 00:21:37.253 rmmod nvme_fabrics 00:21:37.253 rmmod nvme_keyring 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1077727 ']' 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1077727 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1077727 ']' 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1077727 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1077727 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1077727' 00:21:37.253 killing process with pid 1077727 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1077727 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1077727 00:21:37.253 14:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.253 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.166 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:39.166 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:39.166 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:39.166 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:40.552 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:43.132 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:48.426 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:48.427 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:48.427 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:48.427 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:48.427 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.427 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.427 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.427 14:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.427 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.427 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.427 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.427 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.427 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.427 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:21:48.427 00:21:48.427 --- 10.0.0.2 ping statistics --- 00:21:48.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.428 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:21:48.428 00:21:48.428 --- 10.0.0.1 ping statistics --- 00:21:48.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.428 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:48.428 net.core.busy_poll = 1 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:48.428 net.core.busy_read = 1 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1082549 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1082549 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1082549 ']' 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.428 14:08:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.428 [2024-10-30 14:08:46.595048] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:21:48.428 [2024-10-30 14:08:46.595122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.428 [2024-10-30 14:08:46.697598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.689 [2024-10-30 14:08:46.750372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:48.689 [2024-10-30 14:08:46.750429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.689 [2024-10-30 14:08:46.750438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.689 [2024-10-30 14:08:46.750445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.689 [2024-10-30 14:08:46.750451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.689 [2024-10-30 14:08:46.752848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.689 [2024-10-30 14:08:46.753093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.689 [2024-10-30 14:08:46.752929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.689 [2024-10-30 14:08:46.753095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.261 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.522 14:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.522 [2024-10-30 14:08:47.613988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.522 Malloc1 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.522 [2024-10-30 14:08:47.691975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1082741 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:49.522 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:51.443 14:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:51.443 14:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.443 14:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.443 14:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.443 14:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:51.443 "tick_rate": 2400000000, 00:21:51.443 "poll_groups": [ 00:21:51.443 { 00:21:51.443 "name": "nvmf_tgt_poll_group_000", 00:21:51.443 "admin_qpairs": 1, 00:21:51.443 "io_qpairs": 2, 00:21:51.443 "current_admin_qpairs": 1, 00:21:51.443 "current_io_qpairs": 2, 00:21:51.443 "pending_bdev_io": 0, 00:21:51.443 "completed_nvme_io": 23519, 00:21:51.443 "transports": [ 00:21:51.443 { 00:21:51.443 "trtype": "TCP" 00:21:51.443 } 00:21:51.443 ] 00:21:51.443 }, 00:21:51.443 { 00:21:51.443 "name": "nvmf_tgt_poll_group_001", 00:21:51.443 "admin_qpairs": 0, 00:21:51.443 "io_qpairs": 2, 00:21:51.443 "current_admin_qpairs": 0, 00:21:51.443 "current_io_qpairs": 2, 00:21:51.443 "pending_bdev_io": 0, 00:21:51.443 "completed_nvme_io": 25528, 00:21:51.443 "transports": [ 00:21:51.443 { 00:21:51.443 "trtype": "TCP" 00:21:51.443 } 00:21:51.443 ] 00:21:51.443 }, 00:21:51.443 { 00:21:51.443 "name": "nvmf_tgt_poll_group_002", 00:21:51.443 "admin_qpairs": 0, 00:21:51.443 "io_qpairs": 0, 00:21:51.443 "current_admin_qpairs": 0, 00:21:51.443 "current_io_qpairs": 0, 00:21:51.443 "pending_bdev_io": 0, 00:21:51.443 "completed_nvme_io": 0, 00:21:51.443 "transports": [ 00:21:51.443 { 00:21:51.443 "trtype": "TCP" 00:21:51.443 } 00:21:51.443 ] 00:21:51.443 }, 00:21:51.443 { 00:21:51.443 "name": "nvmf_tgt_poll_group_003", 00:21:51.443 "admin_qpairs": 0, 00:21:51.443 "io_qpairs": 0, 00:21:51.443 "current_admin_qpairs": 0, 00:21:51.443 "current_io_qpairs": 0, 00:21:51.443 "pending_bdev_io": 0, 00:21:51.444 "completed_nvme_io": 0, 00:21:51.444 "transports": [ 00:21:51.444 { 00:21:51.444 "trtype": "TCP" 00:21:51.444 } 00:21:51.444 ] 00:21:51.444 } 00:21:51.444 ] 00:21:51.444 }' 00:21:51.444 14:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:51.444 14:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:51.706 14:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:51.706 14:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:51.706 14:08:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1082741 00:21:59.838 Initializing NVMe Controllers 00:21:59.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:59.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:59.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:59.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:59.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:59.838 Initialization complete. Launching workers. 
00:21:59.838 ======================================================== 00:21:59.839 Latency(us) 00:21:59.839 Device Information : IOPS MiB/s Average min max 00:21:59.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9914.80 38.73 6477.78 1064.49 53866.46 00:21:59.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7662.50 29.93 8351.13 1000.95 56768.55 00:21:59.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9127.30 35.65 7032.45 1245.84 54336.49 00:21:59.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9583.30 37.43 6682.46 1032.82 54492.00 00:21:59.839 ======================================================== 00:21:59.839 Total : 36287.89 141.75 7066.92 1000.95 56768.55 00:21:59.839 00:21:59.839 14:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:59.839 14:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.839 14:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:59.839 14:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.839 14:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:59.839 14:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.839 14:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.839 rmmod nvme_tcp 00:21:59.839 rmmod nvme_fabrics 00:21:59.839 rmmod nvme_keyring 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1082549 ']' 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1082549 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1082549 ']' 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1082549 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1082549 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1082549' 00:21:59.839 killing process with pid 1082549 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1082549 00:21:59.839 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1082549 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.099 
14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.099 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:03.402 00:22:03.402 real 0m54.128s 00:22:03.402 user 2m50.679s 00:22:03.402 sys 0m11.436s 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.402 ************************************ 00:22:03.402 END TEST nvmf_perf_adq 00:22:03.402 ************************************ 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:03.402 ************************************ 00:22:03.402 START TEST nvmf_shutdown 00:22:03.402 ************************************ 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:03.402 * Looking for test storage... 
00:22:03.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:03.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.402 --rc genhtml_branch_coverage=1 00:22:03.402 --rc genhtml_function_coverage=1 00:22:03.402 --rc genhtml_legend=1 00:22:03.402 --rc geninfo_all_blocks=1 00:22:03.402 --rc geninfo_unexecuted_blocks=1 00:22:03.402 00:22:03.402 ' 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:03.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.402 --rc genhtml_branch_coverage=1 00:22:03.402 --rc genhtml_function_coverage=1 00:22:03.402 --rc genhtml_legend=1 00:22:03.402 --rc geninfo_all_blocks=1 00:22:03.402 --rc geninfo_unexecuted_blocks=1 00:22:03.402 00:22:03.402 ' 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:03.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.402 --rc genhtml_branch_coverage=1 00:22:03.402 --rc genhtml_function_coverage=1 00:22:03.402 --rc genhtml_legend=1 00:22:03.402 --rc geninfo_all_blocks=1 00:22:03.402 --rc geninfo_unexecuted_blocks=1 00:22:03.402 00:22:03.402 ' 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:03.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.402 --rc genhtml_branch_coverage=1 00:22:03.402 --rc genhtml_function_coverage=1 00:22:03.402 --rc genhtml_legend=1 00:22:03.402 --rc geninfo_all_blocks=1 00:22:03.402 --rc geninfo_unexecuted_blocks=1 00:22:03.402 00:22:03.402 ' 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.402 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:03.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:03.403 14:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:03.403 ************************************ 00:22:03.403 START TEST nvmf_shutdown_tc1 00:22:03.403 ************************************ 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:03.403 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.549 14:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.549 14:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:11.549 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:11.549 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:11.549 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:11.549 14:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:11.549 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.549 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:11.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:22:11.549 00:22:11.549 --- 10.0.0.2 ping statistics --- 00:22:11.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.549 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:22:11.549 00:22:11.549 --- 10.0.0.1 ping statistics --- 00:22:11.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.549 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1089564 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1089564 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1089564 ']' 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.549 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:11.549 [2024-10-30 14:09:09.320209] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:22:11.549 [2024-10-30 14:09:09.320282] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.549 [2024-10-30 14:09:09.422181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.549 [2024-10-30 14:09:09.474626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.549 [2024-10-30 14:09:09.474681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.549 [2024-10-30 14:09:09.474689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.549 [2024-10-30 14:09:09.474696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.549 [2024-10-30 14:09:09.474702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.549 [2024-10-30 14:09:09.477108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.549 [2024-10-30 14:09:09.477271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.549 [2024-10-30 14:09:09.477431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.549 [2024-10-30 14:09:09.477432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.122 [2024-10-30 14:09:10.201961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:12.122 14:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.122 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.122 Malloc1 
00:22:12.122 [2024-10-30 14:09:10.327482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.122 Malloc2 00:22:12.122 Malloc3 00:22:12.384 Malloc4 00:22:12.384 Malloc5 00:22:12.384 Malloc6 00:22:12.384 Malloc7 00:22:12.384 Malloc8 00:22:12.384 Malloc9 00:22:12.647 Malloc10 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1090043 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1090043 /var/tmp/bdevperf.sock 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1090043 ']' 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.647 { 00:22:12.647 "params": { 00:22:12.647 "name": "Nvme$subsystem", 00:22:12.647 "trtype": "$TEST_TRANSPORT", 00:22:12.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.647 "adrfam": "ipv4", 00:22:12.647 "trsvcid": "$NVMF_PORT", 00:22:12.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.647 "hdgst": ${hdgst:-false}, 00:22:12.647 "ddgst": ${ddgst:-false} 00:22:12.647 }, 00:22:12.647 "method": "bdev_nvme_attach_controller" 00:22:12.647 } 00:22:12.647 EOF 00:22:12.647 )") 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.647 { 00:22:12.647 "params": { 00:22:12.647 "name": "Nvme$subsystem", 00:22:12.647 "trtype": "$TEST_TRANSPORT", 00:22:12.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.647 "adrfam": "ipv4", 00:22:12.647 "trsvcid": "$NVMF_PORT", 00:22:12.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.647 "hdgst": ${hdgst:-false}, 00:22:12.647 "ddgst": ${ddgst:-false} 00:22:12.647 }, 00:22:12.647 "method": "bdev_nvme_attach_controller" 00:22:12.647 } 00:22:12.647 EOF 00:22:12.647 )") 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.647 { 00:22:12.647 "params": { 00:22:12.647 "name": "Nvme$subsystem", 00:22:12.647 "trtype": "$TEST_TRANSPORT", 00:22:12.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.647 "adrfam": "ipv4", 00:22:12.647 "trsvcid": "$NVMF_PORT", 00:22:12.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.647 "hdgst": ${hdgst:-false}, 00:22:12.647 "ddgst": ${ddgst:-false} 00:22:12.647 }, 00:22:12.647 "method": "bdev_nvme_attach_controller" 
00:22:12.647 } 00:22:12.647 EOF 00:22:12.647 )") 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.647 { 00:22:12.647 "params": { 00:22:12.647 "name": "Nvme$subsystem", 00:22:12.647 "trtype": "$TEST_TRANSPORT", 00:22:12.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.647 "adrfam": "ipv4", 00:22:12.647 "trsvcid": "$NVMF_PORT", 00:22:12.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.647 "hdgst": ${hdgst:-false}, 00:22:12.647 "ddgst": ${ddgst:-false} 00:22:12.647 }, 00:22:12.647 "method": "bdev_nvme_attach_controller" 00:22:12.647 } 00:22:12.647 EOF 00:22:12.647 )") 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.647 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.647 { 00:22:12.647 "params": { 00:22:12.647 "name": "Nvme$subsystem", 00:22:12.647 "trtype": "$TEST_TRANSPORT", 00:22:12.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.647 "adrfam": "ipv4", 00:22:12.647 "trsvcid": "$NVMF_PORT", 00:22:12.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.648 "hdgst": ${hdgst:-false}, 00:22:12.648 "ddgst": ${ddgst:-false} 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 } 00:22:12.648 EOF 00:22:12.648 )") 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.648 { 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme$subsystem", 00:22:12.648 "trtype": "$TEST_TRANSPORT", 00:22:12.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.648 "adrfam": "ipv4", 00:22:12.648 "trsvcid": "$NVMF_PORT", 00:22:12.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.648 "hdgst": ${hdgst:-false}, 00:22:12.648 "ddgst": ${ddgst:-false} 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 } 00:22:12.648 EOF 00:22:12.648 )") 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.648 [2024-10-30 14:09:10.848070] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
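While bdev_svc runs its DPDK/EAL initialization here, the test script is still parked in waitforlisten on /var/tmp/bdevperf.sock (shutdown.sh@80 above). A minimal sketch of that polling pattern, using a hypothetical helper name (the real waitforlisten in autotest_common.sh also handles retry limits and RPC readiness checks):

    # Poll until the app's UNIX-domain RPC socket appears, or give up if the
    # process dies first. (Illustrative only.)
    wait_for_rpc_sock() {
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
            [[ -S "$sock" ]] && return 0             # socket is up, RPCs can be sent
            sleep 0.1
        done
        return 1
    }
    # e.g. wait_for_rpc_sock 1090043 /var/tmp/bdevperf.sock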
00:22:12.648 [2024-10-30 14:09:10.848142] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.648 { 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme$subsystem", 00:22:12.648 "trtype": "$TEST_TRANSPORT", 00:22:12.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.648 "adrfam": "ipv4", 00:22:12.648 "trsvcid": "$NVMF_PORT", 00:22:12.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.648 "hdgst": ${hdgst:-false}, 00:22:12.648 "ddgst": ${ddgst:-false} 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 } 00:22:12.648 EOF 00:22:12.648 )") 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.648 { 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme$subsystem", 00:22:12.648 "trtype": "$TEST_TRANSPORT", 00:22:12.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.648 "adrfam": "ipv4", 00:22:12.648 "trsvcid": "$NVMF_PORT", 00:22:12.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.648 "hdgst": ${hdgst:-false}, 00:22:12.648 "ddgst": ${ddgst:-false} 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 } 00:22:12.648 EOF 00:22:12.648 )") 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.648 { 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme$subsystem", 00:22:12.648 "trtype": "$TEST_TRANSPORT", 00:22:12.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.648 "adrfam": "ipv4", 00:22:12.648 "trsvcid": "$NVMF_PORT", 00:22:12.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.648 "hdgst": ${hdgst:-false}, 00:22:12.648 "ddgst": ${ddgst:-false} 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 } 00:22:12.648 EOF 00:22:12.648 )") 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.648 { 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme$subsystem", 00:22:12.648 "trtype": "$TEST_TRANSPORT", 00:22:12.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.648 "adrfam": "ipv4", 
00:22:12.648 "trsvcid": "$NVMF_PORT", 00:22:12.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.648 "hdgst": ${hdgst:-false}, 00:22:12.648 "ddgst": ${ddgst:-false} 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 } 00:22:12.648 EOF 00:22:12.648 )") 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:12.648 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme1", 00:22:12.648 "trtype": "tcp", 00:22:12.648 "traddr": "10.0.0.2", 00:22:12.648 "adrfam": "ipv4", 00:22:12.648 "trsvcid": "4420", 00:22:12.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.648 "hdgst": false, 00:22:12.648 "ddgst": false 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 },{ 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme2", 00:22:12.648 "trtype": "tcp", 00:22:12.648 "traddr": "10.0.0.2", 00:22:12.648 "adrfam": "ipv4", 00:22:12.648 "trsvcid": "4420", 00:22:12.648 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.648 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.648 "hdgst": false, 00:22:12.648 "ddgst": false 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 },{ 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme3", 00:22:12.648 "trtype": "tcp", 00:22:12.648 "traddr": "10.0.0.2", 00:22:12.648 "adrfam": "ipv4", 00:22:12.648 "trsvcid": "4420", 00:22:12.648 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.648 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.648 "hdgst": false, 00:22:12.648 "ddgst": false 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 },{ 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme4", 00:22:12.648 "trtype": "tcp", 00:22:12.648 "traddr": "10.0.0.2", 00:22:12.648 "adrfam": "ipv4", 00:22:12.648 "trsvcid": "4420", 00:22:12.648 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.648 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.648 "hdgst": false, 00:22:12.648 "ddgst": false 00:22:12.648 }, 00:22:12.648 "method": "bdev_nvme_attach_controller" 00:22:12.648 },{ 00:22:12.648 "params": { 00:22:12.648 "name": "Nvme5", 00:22:12.649 "trtype": "tcp", 00:22:12.649 "traddr": "10.0.0.2", 00:22:12.649 "adrfam": "ipv4", 00:22:12.649 "trsvcid": "4420", 00:22:12.649 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.649 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.649 "hdgst": false, 00:22:12.649 "ddgst": false 00:22:12.649 }, 00:22:12.649 "method": "bdev_nvme_attach_controller" 00:22:12.649 },{ 00:22:12.649 "params": { 00:22:12.649 "name": "Nvme6", 00:22:12.649 "trtype": "tcp", 00:22:12.649 "traddr": "10.0.0.2", 00:22:12.649 "adrfam": "ipv4", 00:22:12.649 "trsvcid": "4420", 00:22:12.649 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.649 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.649 "hdgst": false, 00:22:12.649 "ddgst": false 00:22:12.649 }, 00:22:12.649 "method": "bdev_nvme_attach_controller" 00:22:12.649 },{ 00:22:12.649 "params": { 00:22:12.649 "name": "Nvme7", 00:22:12.649 "trtype": "tcp", 00:22:12.649 "traddr": "10.0.0.2", 00:22:12.649 
"adrfam": "ipv4", 00:22:12.649 "trsvcid": "4420", 00:22:12.649 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.649 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.649 "hdgst": false, 00:22:12.649 "ddgst": false 00:22:12.649 }, 00:22:12.649 "method": "bdev_nvme_attach_controller" 00:22:12.649 },{ 00:22:12.649 "params": { 00:22:12.649 "name": "Nvme8", 00:22:12.649 "trtype": "tcp", 00:22:12.649 "traddr": "10.0.0.2", 00:22:12.649 "adrfam": "ipv4", 00:22:12.649 "trsvcid": "4420", 00:22:12.649 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.649 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:12.649 "hdgst": false, 00:22:12.649 "ddgst": false 00:22:12.649 }, 00:22:12.649 "method": "bdev_nvme_attach_controller" 00:22:12.649 },{ 00:22:12.649 "params": { 00:22:12.649 "name": "Nvme9", 00:22:12.649 "trtype": "tcp", 00:22:12.649 "traddr": "10.0.0.2", 00:22:12.649 "adrfam": "ipv4", 00:22:12.649 "trsvcid": "4420", 00:22:12.649 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.649 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:12.649 "hdgst": false, 00:22:12.649 "ddgst": false 00:22:12.649 }, 00:22:12.649 "method": "bdev_nvme_attach_controller" 00:22:12.649 },{ 00:22:12.649 "params": { 00:22:12.649 "name": "Nvme10", 00:22:12.649 "trtype": "tcp", 00:22:12.649 "traddr": "10.0.0.2", 00:22:12.649 "adrfam": "ipv4", 00:22:12.649 "trsvcid": "4420", 00:22:12.649 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.649 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.649 "hdgst": false, 00:22:12.649 "ddgst": false 00:22:12.649 }, 00:22:12.649 "method": "bdev_nvme_attach_controller" 00:22:12.649 }' 00:22:12.649 [2024-10-30 14:09:10.943553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.910 [2024-10-30 14:09:10.997156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.293 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.293 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:14.293 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:14.293 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.293 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.293 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.293 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1090043 00:22:14.293 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:14.293 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:15.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1090043 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1089564 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.236 { 00:22:15.236 "params": { 00:22:15.236 "name": "Nvme$subsystem", 00:22:15.236 "trtype": "$TEST_TRANSPORT", 00:22:15.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.236 "adrfam": "ipv4", 00:22:15.236 "trsvcid": "$NVMF_PORT", 00:22:15.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.236 "hdgst": ${hdgst:-false}, 00:22:15.236 "ddgst": ${ddgst:-false} 00:22:15.236 }, 00:22:15.236 "method": "bdev_nvme_attach_controller" 00:22:15.236 } 00:22:15.236 EOF 00:22:15.236 )") 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.236 { 00:22:15.236 "params": { 00:22:15.236 "name": "Nvme$subsystem", 00:22:15.236 "trtype": "$TEST_TRANSPORT", 00:22:15.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.236 "adrfam": "ipv4", 00:22:15.236 "trsvcid": "$NVMF_PORT", 00:22:15.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.236 "hdgst": ${hdgst:-false}, 00:22:15.236 "ddgst": ${ddgst:-false} 00:22:15.236 }, 00:22:15.236 "method": "bdev_nvme_attach_controller" 00:22:15.236 } 00:22:15.236 EOF 00:22:15.236 )") 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.236 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.236 { 00:22:15.236 "params": { 00:22:15.236 "name": "Nvme$subsystem", 00:22:15.236 "trtype": "$TEST_TRANSPORT", 00:22:15.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.236 "adrfam": "ipv4", 00:22:15.236 "trsvcid": "$NVMF_PORT", 00:22:15.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.236 "hdgst": ${hdgst:-false}, 00:22:15.236 "ddgst": ${ddgst:-false} 00:22:15.236 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 } 00:22:15.237 EOF 00:22:15.237 )") 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.237 { 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme$subsystem", 00:22:15.237 "trtype": "$TEST_TRANSPORT", 00:22:15.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "$NVMF_PORT", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.237 "hdgst": ${hdgst:-false}, 00:22:15.237 "ddgst": ${ddgst:-false} 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 } 00:22:15.237 EOF 00:22:15.237 )") 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.237 { 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme$subsystem", 00:22:15.237 "trtype": "$TEST_TRANSPORT", 00:22:15.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "$NVMF_PORT", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.237 "hdgst": ${hdgst:-false}, 00:22:15.237 "ddgst": ${ddgst:-false} 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 } 00:22:15.237 EOF 00:22:15.237 )") 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.237 { 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme$subsystem", 00:22:15.237 "trtype": "$TEST_TRANSPORT", 00:22:15.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "$NVMF_PORT", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.237 "hdgst": ${hdgst:-false}, 00:22:15.237 "ddgst": ${ddgst:-false} 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 } 00:22:15.237 EOF 00:22:15.237 )") 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.237 [2024-10-30 14:09:13.307905] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
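At this point the first app (pid 1090043) has been hard-killed (the "Killed" message above) and bdevperf is starting against the same ten subsystems. Condensed from the target/shutdown.sh line numbers visible in this log (rpc_cmd and gen_nvmf_target_json are helpers from the SPDK test harness, not defined here), the tc1 sequence is roughly:

    perfpid=1090043                                        # bdev_svc holding the 10-controller config
    rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init  # shutdown.sh@81: wait for init to finish
    kill -9 "$perfpid"                                     # shutdown.sh@84: kill the app mid-flight
    rm -f /var/run/spdk_bdev1                              # shutdown.sh@85
    sleep 1                                                # shutdown.sh@88
    kill -0 1089564                                        # shutdown.sh@89: the nvmf target must survive
    # shutdown.sh@92: re-drive I/O through bdevperf against the same subsystems
    bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 1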
00:22:15.237 [2024-10-30 14:09:13.307960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090692 ] 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.237 { 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme$subsystem", 00:22:15.237 "trtype": "$TEST_TRANSPORT", 00:22:15.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "$NVMF_PORT", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.237 "hdgst": ${hdgst:-false}, 00:22:15.237 "ddgst": ${ddgst:-false} 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 } 00:22:15.237 EOF 00:22:15.237 )") 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.237 { 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme$subsystem", 00:22:15.237 "trtype": "$TEST_TRANSPORT", 00:22:15.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "$NVMF_PORT", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.237 "hdgst": ${hdgst:-false}, 00:22:15.237 "ddgst": ${ddgst:-false} 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 } 00:22:15.237 EOF 00:22:15.237 )") 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.237 { 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme$subsystem", 00:22:15.237 "trtype": "$TEST_TRANSPORT", 00:22:15.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "$NVMF_PORT", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.237 "hdgst": ${hdgst:-false}, 00:22:15.237 "ddgst": ${ddgst:-false} 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 } 00:22:15.237 EOF 00:22:15.237 )") 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.237 { 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme$subsystem", 00:22:15.237 "trtype": "$TEST_TRANSPORT", 00:22:15.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.237 
"adrfam": "ipv4", 00:22:15.237 "trsvcid": "$NVMF_PORT", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.237 "hdgst": ${hdgst:-false}, 00:22:15.237 "ddgst": ${ddgst:-false} 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 } 00:22:15.237 EOF 00:22:15.237 )") 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:15.237 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme1", 00:22:15.237 "trtype": "tcp", 00:22:15.237 "traddr": "10.0.0.2", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "4420", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.237 "hdgst": false, 00:22:15.237 "ddgst": false 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 },{ 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme2", 00:22:15.237 "trtype": "tcp", 00:22:15.237 "traddr": "10.0.0.2", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "4420", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:15.237 "hdgst": false, 00:22:15.237 "ddgst": false 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 },{ 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme3", 00:22:15.237 "trtype": "tcp", 00:22:15.237 "traddr": "10.0.0.2", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "4420", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:15.237 "hdgst": false, 00:22:15.237 "ddgst": false 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 },{ 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme4", 00:22:15.237 "trtype": "tcp", 00:22:15.237 "traddr": "10.0.0.2", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "4420", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:15.237 "hdgst": false, 00:22:15.237 "ddgst": false 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 },{ 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme5", 00:22:15.237 "trtype": "tcp", 00:22:15.237 "traddr": "10.0.0.2", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "4420", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:15.237 "hdgst": false, 00:22:15.237 "ddgst": false 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 },{ 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme6", 00:22:15.237 "trtype": "tcp", 00:22:15.237 "traddr": "10.0.0.2", 00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "4420", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:15.237 "hdgst": false, 00:22:15.237 "ddgst": false 00:22:15.237 }, 00:22:15.237 "method": "bdev_nvme_attach_controller" 00:22:15.237 },{ 00:22:15.237 "params": { 00:22:15.237 "name": "Nvme7", 00:22:15.237 "trtype": "tcp", 00:22:15.237 "traddr": "10.0.0.2", 
00:22:15.237 "adrfam": "ipv4", 00:22:15.237 "trsvcid": "4420", 00:22:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:15.237 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:15.238 "hdgst": false, 00:22:15.238 "ddgst": false 00:22:15.238 }, 00:22:15.238 "method": "bdev_nvme_attach_controller" 00:22:15.238 },{ 00:22:15.238 "params": { 00:22:15.238 "name": "Nvme8", 00:22:15.238 "trtype": "tcp", 00:22:15.238 "traddr": "10.0.0.2", 00:22:15.238 "adrfam": "ipv4", 00:22:15.238 "trsvcid": "4420", 00:22:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:15.238 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:15.238 "hdgst": false, 00:22:15.238 "ddgst": false 00:22:15.238 }, 00:22:15.238 "method": "bdev_nvme_attach_controller" 00:22:15.238 },{ 00:22:15.238 "params": { 00:22:15.238 "name": "Nvme9", 00:22:15.238 "trtype": "tcp", 00:22:15.238 "traddr": "10.0.0.2", 00:22:15.238 "adrfam": "ipv4", 00:22:15.238 "trsvcid": "4420", 00:22:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:15.238 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:15.238 "hdgst": false, 00:22:15.238 "ddgst": false 00:22:15.238 }, 00:22:15.238 "method": "bdev_nvme_attach_controller" 00:22:15.238 },{ 00:22:15.238 "params": { 00:22:15.238 "name": "Nvme10", 00:22:15.238 "trtype": "tcp", 00:22:15.238 "traddr": "10.0.0.2", 00:22:15.238 "adrfam": "ipv4", 00:22:15.238 "trsvcid": "4420", 00:22:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:15.238 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:15.238 "hdgst": false, 00:22:15.238 "ddgst": false 00:22:15.238 }, 00:22:15.238 "method": "bdev_nvme_attach_controller" 00:22:15.238 }' 00:22:15.238 [2024-10-30 14:09:13.398914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.238 [2024-10-30 14:09:13.435366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.619 Running I/O for 1 seconds... 
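In the result table that follows, MiB/s is simply IOPS scaled by the 64 KiB I/O size (65536 / 2^20 = 0.0625), which gives a quick sanity check on each row:

    # Nvme1n1: 235.72 IOPS at 64 KiB per I/O
    awk 'BEGIN { printf "%.2f MiB/s\n", 235.72 * 65536 / 1048576 }'   # prints 14.73
    # Total: 2470.08 * 0.0625 = 154.38 MiB/s, matching the summary line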
00:22:17.819 1861.00 IOPS, 116.31 MiB/s
00:22:17.819 Latency(us)
00:22:17.819 [2024-10-30T13:09:16.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.819 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.819 Verification LBA range: start 0x0 length 0x400
00:22:17.819 Nvme1n1 : 1.09 235.72 14.73 0.00 0.00 268373.76 16930.13 242920.11
00:22:17.819 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.819 Verification LBA range: start 0x0 length 0x400
00:22:17.819 Nvme2n1 : 1.13 226.04 14.13 0.00 0.00 274689.71 33860.27 251658.24
00:22:17.819 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.819 Verification LBA range: start 0x0 length 0x400
00:22:17.819 Nvme3n1 : 1.09 235.21 14.70 0.00 0.00 259479.68 15837.87 260396.37
00:22:17.819 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.819 Verification LBA range: start 0x0 length 0x400
00:22:17.819 Nvme4n1 : 1.10 233.12 14.57 0.00 0.00 257098.88 21080.75 246415.36
00:22:17.819 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.819 Verification LBA range: start 0x0 length 0x400
00:22:17.819 Nvme5n1 : 1.12 233.29 14.58 0.00 0.00 247129.41 3659.09 246415.36
00:22:17.819 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.819 Verification LBA range: start 0x0 length 0x400
00:22:17.819 Nvme6n1 : 1.13 225.84 14.12 0.00 0.00 255694.93 16056.32 249910.61
00:22:17.819 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.819 Verification LBA range: start 0x0 length 0x400
00:22:17.819 Nvme7n1 : 1.16 276.14 17.26 0.00 0.00 206117.03 12124.16 265639.25
00:22:17.819 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.819 Verification LBA range: start 0x0 length 0x400
00:22:17.820 Nvme8n1 : 1.19 268.82 16.80 0.00 0.00 208545.96 16056.32 244667.73
00:22:17.820 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.820 Verification LBA range: start 0x0 length 0x400
00:22:17.820 Nvme9n1 : 1.20 270.63 16.91 0.00 0.00 203527.21 5816.32 242920.11
00:22:17.820 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.820 Verification LBA range: start 0x0 length 0x400
00:22:17.820 Nvme10n1 : 1.21 265.26 16.58 0.00 0.00 204170.15 10267.31 263891.63
00:22:17.820 [2024-10-30T13:09:16.119Z] ===================================================================================================================
00:22:17.820 [2024-10-30T13:09:16.119Z] Total : 2470.08 154.38 0.00 0.00 235467.77 3659.09 265639.25
00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:18.081 14:09:16
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.081 rmmod nvme_tcp 00:22:18.081 rmmod nvme_fabrics 00:22:18.081 rmmod nvme_keyring 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1089564 ']' 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1089564 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1089564 ']' 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1089564 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1089564 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1089564' 00:22:18.081 killing process with pid 1089564 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1089564 00:22:18.081 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1089564 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:18.343 14:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.343 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:20.893 00:22:20.893 real 0m16.917s 00:22:20.893 user 0m34.068s 00:22:20.893 sys 0m7.044s 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:20.893 ************************************ 00:22:20.893 END TEST nvmf_shutdown_tc1 00:22:20.893 ************************************ 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:20.893 ************************************ 00:22:20.893 START TEST nvmf_shutdown_tc2 00:22:20.893 ************************************ 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.893 
14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:20.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:20.893 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:20.893 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:20.893 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:20.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:20.894 14:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:20.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:22:20.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:22:20.894 00:22:20.894 --- 10.0.0.2 ping statistics --- 00:22:20.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.894 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:22:20.894 00:22:20.894 --- 10.0.0.1 ping statistics --- 00:22:20.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.894 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:20.894 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1091807 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1091807 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1091807 ']' 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.894 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.894 [2024-10-30 14:09:19.101657] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:22:20.894 [2024-10-30 14:09:19.101723] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.155 [2024-10-30 14:09:19.195793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.155 [2024-10-30 14:09:19.229839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.155 [2024-10-30 14:09:19.229868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.155 [2024-10-30 14:09:19.229874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.155 [2024-10-30 14:09:19.229879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.155 [2024-10-30 14:09:19.229886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.155 [2024-10-30 14:09:19.231446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.155 [2024-10-30 14:09:19.231600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.155 [2024-10-30 14:09:19.231713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.155 [2024-10-30 14:09:19.231716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.726 [2024-10-30 14:09:19.954446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.726 
14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:21.726 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:21.726 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:21.726 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
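The ten repetitions of shutdown.sh@28/@29 above are the create_subsystems phase: each pass of the num_subsystems loop cat-appends one block of RPC commands to rpcs.txt, and shutdown.sh@36 then replays the whole file through rpc_cmd as a single batch, which is what produces the Malloc1..Malloc10 bdevs seen immediately below. A minimal sketch of that pattern; the RPC names, bdev sizes, NQNs and the $testdir variable here are illustrative assumptions, not the script's literal contents:

num_subsystems=({1..10})
rm -rf "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
        cat <<EOF >> "$testdir/rpcs.txt"
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# replay the accumulated batch against the target's default RPC socket
rpc_cmd < "$testdir/rpcs.txt"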
00:22:21.726 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.986 Malloc1 00:22:21.986 [2024-10-30 14:09:20.071754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.986 Malloc2 00:22:21.986 Malloc3 00:22:21.986 Malloc4 00:22:21.986 Malloc5 00:22:21.986 Malloc6 00:22:21.986 Malloc7 00:22:22.247 Malloc8 00:22:22.247 Malloc9 00:22:22.247 Malloc10 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1092193 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1092193 /var/tmp/bdevperf.sock 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1092193 ']' 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
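Here shutdown.sh@104/@105 records the bdevperf pid and waits for its RPC socket; the command traced on the next lines (shutdown.sh@103) is the I/O generator itself: build/examples/bdevperf run against a JSON configuration streamed over /dev/fd/63, with -q 64 (64 outstanding I/Os per bdev), -o 65536 (64 KiB I/O size), -w verify (read-back verification) and -t 10 (a 10-second run). Written out as a standalone command this is roughly the following sketch; the harness builds the same thing with process substitution:

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock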
00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.247 { 00:22:22.247 "params": { 00:22:22.247 "name": "Nvme$subsystem", 00:22:22.247 "trtype": "$TEST_TRANSPORT", 00:22:22.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.247 "adrfam": "ipv4", 00:22:22.247 "trsvcid": "$NVMF_PORT", 00:22:22.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.247 "hdgst": ${hdgst:-false}, 00:22:22.247 "ddgst": ${ddgst:-false} 00:22:22.247 }, 00:22:22.247 "method": "bdev_nvme_attach_controller" 00:22:22.247 } 00:22:22.247 EOF 00:22:22.247 )") 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.247 { 00:22:22.247 "params": { 00:22:22.247 "name": "Nvme$subsystem", 00:22:22.247 "trtype": "$TEST_TRANSPORT", 00:22:22.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.247 "adrfam": "ipv4", 00:22:22.247 "trsvcid": "$NVMF_PORT", 00:22:22.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.247 "hdgst": ${hdgst:-false}, 00:22:22.247 "ddgst": ${ddgst:-false} 00:22:22.247 }, 00:22:22.247 "method": "bdev_nvme_attach_controller" 00:22:22.247 } 00:22:22.247 EOF 00:22:22.247 )") 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.247 { 00:22:22.247 "params": { 00:22:22.247 "name": "Nvme$subsystem", 00:22:22.247 "trtype": "$TEST_TRANSPORT", 00:22:22.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.247 "adrfam": "ipv4", 00:22:22.247 "trsvcid": "$NVMF_PORT", 00:22:22.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.247 "hdgst": ${hdgst:-false}, 00:22:22.247 "ddgst": ${ddgst:-false} 00:22:22.247 }, 00:22:22.247 "method": 
"bdev_nvme_attach_controller" 00:22:22.247 } 00:22:22.247 EOF 00:22:22.247 )") 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.247 { 00:22:22.247 "params": { 00:22:22.247 "name": "Nvme$subsystem", 00:22:22.247 "trtype": "$TEST_TRANSPORT", 00:22:22.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.247 "adrfam": "ipv4", 00:22:22.247 "trsvcid": "$NVMF_PORT", 00:22:22.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.247 "hdgst": ${hdgst:-false}, 00:22:22.247 "ddgst": ${ddgst:-false} 00:22:22.247 }, 00:22:22.247 "method": "bdev_nvme_attach_controller" 00:22:22.247 } 00:22:22.247 EOF 00:22:22.247 )") 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.247 { 00:22:22.247 "params": { 00:22:22.247 "name": "Nvme$subsystem", 00:22:22.247 "trtype": "$TEST_TRANSPORT", 00:22:22.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.247 "adrfam": "ipv4", 00:22:22.247 "trsvcid": "$NVMF_PORT", 00:22:22.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.247 "hdgst": ${hdgst:-false}, 00:22:22.247 "ddgst": ${ddgst:-false} 00:22:22.247 }, 00:22:22.247 "method": "bdev_nvme_attach_controller" 00:22:22.247 } 00:22:22.247 EOF 00:22:22.247 )") 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.247 { 00:22:22.247 "params": { 00:22:22.247 "name": "Nvme$subsystem", 00:22:22.247 "trtype": "$TEST_TRANSPORT", 00:22:22.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.247 "adrfam": "ipv4", 00:22:22.247 "trsvcid": "$NVMF_PORT", 00:22:22.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.247 "hdgst": ${hdgst:-false}, 00:22:22.247 "ddgst": ${ddgst:-false} 00:22:22.247 }, 00:22:22.247 "method": "bdev_nvme_attach_controller" 00:22:22.247 } 00:22:22.247 EOF 00:22:22.247 )") 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.247 [2024-10-30 14:09:20.515978] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:22:22.247 [2024-10-30 14:09:20.516033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092193 ] 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.247 { 00:22:22.247 "params": { 00:22:22.247 "name": "Nvme$subsystem", 00:22:22.247 "trtype": "$TEST_TRANSPORT", 00:22:22.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.247 "adrfam": "ipv4", 00:22:22.247 "trsvcid": "$NVMF_PORT", 00:22:22.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.247 "hdgst": ${hdgst:-false}, 00:22:22.247 "ddgst": ${ddgst:-false} 00:22:22.247 }, 00:22:22.247 "method": "bdev_nvme_attach_controller" 00:22:22.247 } 00:22:22.247 EOF 00:22:22.247 )") 00:22:22.247 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.248 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.248 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.248 { 00:22:22.248 "params": { 00:22:22.248 "name": "Nvme$subsystem", 00:22:22.248 "trtype": "$TEST_TRANSPORT", 00:22:22.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.248 "adrfam": "ipv4", 00:22:22.248 "trsvcid": "$NVMF_PORT", 00:22:22.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.248 "hdgst": ${hdgst:-false}, 00:22:22.248 "ddgst": ${ddgst:-false} 00:22:22.248 }, 00:22:22.248 "method": "bdev_nvme_attach_controller" 00:22:22.248 } 00:22:22.248 EOF 00:22:22.248 )") 00:22:22.248 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.248 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.248 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.248 { 00:22:22.248 "params": { 00:22:22.248 "name": "Nvme$subsystem", 00:22:22.248 "trtype": "$TEST_TRANSPORT", 00:22:22.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.248 "adrfam": "ipv4", 00:22:22.248 "trsvcid": "$NVMF_PORT", 00:22:22.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.248 "hdgst": ${hdgst:-false}, 00:22:22.248 "ddgst": ${ddgst:-false} 00:22:22.248 }, 00:22:22.248 "method": "bdev_nvme_attach_controller" 00:22:22.248 } 00:22:22.248 EOF 00:22:22.248 )") 00:22:22.248 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.248 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.248 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.248 { 00:22:22.248 "params": { 00:22:22.248 "name": "Nvme$subsystem", 00:22:22.248 "trtype": "$TEST_TRANSPORT", 00:22:22.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.248 
"adrfam": "ipv4", 00:22:22.248 "trsvcid": "$NVMF_PORT", 00:22:22.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.248 "hdgst": ${hdgst:-false}, 00:22:22.248 "ddgst": ${ddgst:-false} 00:22:22.248 }, 00:22:22.248 "method": "bdev_nvme_attach_controller" 00:22:22.248 } 00:22:22.248 EOF 00:22:22.248 )") 00:22:22.248 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:22.509 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:22.509 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:22.509 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme1", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 },{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme2", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 },{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme3", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 },{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme4", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 },{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme5", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 },{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme6", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 },{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme7", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 
00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 },{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme8", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 },{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme9", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 },{ 00:22:22.509 "params": { 00:22:22.509 "name": "Nvme10", 00:22:22.509 "trtype": "tcp", 00:22:22.509 "traddr": "10.0.0.2", 00:22:22.509 "adrfam": "ipv4", 00:22:22.509 "trsvcid": "4420", 00:22:22.509 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:22.509 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:22.509 "hdgst": false, 00:22:22.509 "ddgst": false 00:22:22.509 }, 00:22:22.509 "method": "bdev_nvme_attach_controller" 00:22:22.509 }' 00:22:22.509 [2024-10-30 14:09:20.605751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.509 [2024-10-30 14:09:20.641944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.899 Running I/O for 10 seconds... 
00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.899 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.159 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.159 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:24.159 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:24.159 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:24.420 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:24.420 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:24.420 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:24.420 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:24.420 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.420 14:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.420 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.420 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:24.420 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:24.420 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1092193 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1092193 ']' 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1092193 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092193 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092193' 00:22:24.681 killing process with pid 1092193 00:22:24.681 14:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1092193
00:22:24.681 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1092193
00:22:24.941 Received shutdown signal, test time was about 0.972205 seconds
00:22:24.941
00:22:24.941 Latency(us)
00:22:24.941 [2024-10-30T13:09:23.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:24.941 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.941 Verification LBA range: start 0x0 length 0x400
00:22:24.941 Nvme1n1 : 0.94 203.90 12.74 0.00 0.00 310318.08 14964.05 258648.75
00:22:24.941 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.941 Verification LBA range: start 0x0 length 0x400
00:22:24.941 Nvme2n1 : 0.96 265.32 16.58 0.00 0.00 233454.93 14636.37 286610.77
00:22:24.941 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.941 Verification LBA range: start 0x0 length 0x400
00:22:24.941 Nvme3n1 : 0.97 265.06 16.57 0.00 0.00 228817.49 39540.05 222822.40
00:22:24.941 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.941 Verification LBA range: start 0x0 length 0x400
00:22:24.941 Nvme4n1 : 0.96 266.45 16.65 0.00 0.00 223049.81 34078.72 265639.25
00:22:24.941 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.941 Verification LBA range: start 0x0 length 0x400
00:22:24.942 Nvme5n1 : 0.95 268.12 16.76 0.00 0.00 216781.23 24466.77 208841.39
00:22:24.942 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.942 Verification LBA range: start 0x0 length 0x400
00:22:24.942 Nvme6n1 : 0.94 204.80 12.80 0.00 0.00 277369.17 21626.88 244667.73
00:22:24.942 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.942 Verification LBA range: start 0x0 length 0x400
00:22:24.942 Nvme7n1 : 0.97 252.24 15.76 0.00 0.00 219334.73 14964.05 253405.87
00:22:24.942 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.942 Verification LBA range: start 0x0 length 0x400
00:22:24.942 Nvme8n1 : 0.96 267.27 16.70 0.00 0.00 203599.15 18677.76 249910.61
00:22:24.942 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.942 Verification LBA range: start 0x0 length 0x400
00:22:24.942 Nvme9n1 : 0.95 202.52 12.66 0.00 0.00 261894.83 22500.69 249910.61
00:22:24.942 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:24.942 Verification LBA range: start 0x0 length 0x400
00:22:24.942 Nvme10n1 : 0.95 201.90 12.62 0.00 0.00 256745.53 19333.12 272629.76
00:22:24.942 [2024-10-30T13:09:23.241Z] ===================================================================================================================
00:22:24.942 [2024-10-30T13:09:23.241Z] Total : 2397.57 149.85 0.00 0.00 239516.69 14636.37 286610.77
00:22:24.942 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1091807
00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:25.882 14:09:24
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:25.882 rmmod nvme_tcp 00:22:25.882 rmmod nvme_fabrics 00:22:25.882 rmmod nvme_keyring 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1091807 ']' 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1091807 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1091807 ']' 00:22:25.882 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1091807 00:22:26.141 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:26.141 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.141 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091807 00:22:26.141 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:26.141 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:26.141 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091807' 00:22:26.141 killing process with pid 1091807 00:22:26.141 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1091807 00:22:26.141 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1091807 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:26.401 14:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.401 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.311 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:28.311 00:22:28.311 real 0m7.859s 00:22:28.311 user 0m23.772s 00:22:28.311 sys 0m1.281s 00:22:28.311 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.311 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:28.311 ************************************ 00:22:28.311 END TEST nvmf_shutdown_tc2 00:22:28.311 ************************************ 00:22:28.311 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:28.311 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:28.311 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.311 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:28.572 ************************************ 00:22:28.572 START TEST nvmf_shutdown_tc3 00:22:28.572 ************************************ 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:28.572 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:28.572 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.572 14:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:28.572 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:28.572 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.572 14:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.572 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.573 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.573 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.573 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.573 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.573 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.832 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.832 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.832 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.832 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:22:28.832 00:22:28.832 --- 10.0.0.2 ping statistics --- 00:22:28.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.832 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:22:28.832 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:22:28.832 00:22:28.832 --- 10.0.0.1 ping statistics --- 00:22:28.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.832 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:22:28.832 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.832 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:28.833 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.833 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.833 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.833 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.833 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.833 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.833 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1093631 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1093631 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:28.833 14:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1093631 ']' 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.833 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:28.833 [2024-10-30 14:09:27.064814] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:22:28.833 [2024-10-30 14:09:27.064868] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.092 [2024-10-30 14:09:27.147217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.092 [2024-10-30 14:09:27.178119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.092 [2024-10-30 14:09:27.178147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.092 [2024-10-30 14:09:27.178152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.092 [2024-10-30 14:09:27.178157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.092 [2024-10-30 14:09:27.178164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
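The waitforlisten call traced above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock (max_retries=100 in the trace). The real helper lives in autotest_common.sh and also watches the pid it was given; the following is only a minimal sketch of the socket probe, assuming SPDK's scripts/rpc.py is used for it:

```bash
# Poll the target's RPC socket until it responds, or give up after 100 tries.
# rpc.py, the socket path and the retry count mirror the trace; the probe
# method (rpc_get_methods) and the 0.5 s interval are assumptions of this sketch.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
for ((i = 100; i > 0; i--)); do
    if "$rpc" -t 1 -s "$sock" rpc_get_methods &> /dev/null; then
        break                     # nvmf_tgt is up and listening on the socket
    fi
    sleep 0.5
done
(( i > 0 )) || exit 1             # fail the test if the target never came up
```

Once the target answers, the trace that follows shows shutdown.sh creating the TCP transport (nvmf_create_transport -t tcp -o -u 8192) and then writing ten blocks of RPCs into rpcs.txt, one per subsystem. Per iteration the block is roughly the following; the RPC names are standard SPDK RPCs, but the malloc size, block size and serial number are illustrative assumptions, while the NQNs, transport, address and port are the ones visible in the log:

```bash
# Populate rpcs.txt with one block per subsystem (i = 1..10) and replay the
# file against the target's RPC socket in one shot, as the bare rpc_cmd call
# at shutdown.sh@36 does further down. Sizes (64 MiB, 512 B) and the serial
# "SPDK$i" are assumptions of this sketch.
rm -rf rpcs.txt
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py < rpcs.txt
```

Replaying that file is what produces the Malloc1 through Malloc10 bdevs and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice reported below.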
00:22:29.092 [2024-10-30 14:09:27.179391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.092 [2024-10-30 14:09:27.179542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.092 [2024-10-30 14:09:27.179562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:29.092 [2024-10-30 14:09:27.179564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:29.661 [2024-10-30 14:09:27.919764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:29.661 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.921 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:29.921 Malloc1 00:22:29.921 [2024-10-30 14:09:28.033581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.921 Malloc2 00:22:29.921 Malloc3 00:22:29.921 Malloc4 00:22:29.921 Malloc5 00:22:29.921 Malloc6 00:22:30.182 Malloc7 00:22:30.182 Malloc8 00:22:30.182 Malloc9 00:22:30.182 Malloc10 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1093871 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1093871 /var/tmp/bdevperf.sock 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1093871 ']' 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.182 14:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.182 { 00:22:30.182 "params": { 00:22:30.182 "name": "Nvme$subsystem", 00:22:30.182 "trtype": "$TEST_TRANSPORT", 00:22:30.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.182 "adrfam": "ipv4", 00:22:30.182 "trsvcid": "$NVMF_PORT", 00:22:30.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.182 "hdgst": ${hdgst:-false}, 00:22:30.182 "ddgst": ${ddgst:-false} 00:22:30.182 }, 00:22:30.182 "method": "bdev_nvme_attach_controller" 00:22:30.182 } 00:22:30.182 EOF 00:22:30.182 )") 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.182 { 00:22:30.182 "params": { 00:22:30.182 "name": "Nvme$subsystem", 00:22:30.182 "trtype": "$TEST_TRANSPORT", 00:22:30.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.182 "adrfam": "ipv4", 00:22:30.182 "trsvcid": "$NVMF_PORT", 00:22:30.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.182 "hdgst": ${hdgst:-false}, 00:22:30.182 "ddgst": ${ddgst:-false} 00:22:30.182 }, 00:22:30.182 "method": "bdev_nvme_attach_controller" 00:22:30.182 } 00:22:30.182 EOF 00:22:30.182 )") 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.182 { 00:22:30.182 "params": { 00:22:30.182 
"name": "Nvme$subsystem", 00:22:30.182 "trtype": "$TEST_TRANSPORT", 00:22:30.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.182 "adrfam": "ipv4", 00:22:30.182 "trsvcid": "$NVMF_PORT", 00:22:30.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.182 "hdgst": ${hdgst:-false}, 00:22:30.182 "ddgst": ${ddgst:-false} 00:22:30.182 }, 00:22:30.182 "method": "bdev_nvme_attach_controller" 00:22:30.182 } 00:22:30.182 EOF 00:22:30.182 )") 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.182 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.183 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.183 { 00:22:30.183 "params": { 00:22:30.183 "name": "Nvme$subsystem", 00:22:30.183 "trtype": "$TEST_TRANSPORT", 00:22:30.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.183 "adrfam": "ipv4", 00:22:30.183 "trsvcid": "$NVMF_PORT", 00:22:30.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.183 "hdgst": ${hdgst:-false}, 00:22:30.183 "ddgst": ${ddgst:-false} 00:22:30.183 }, 00:22:30.183 "method": "bdev_nvme_attach_controller" 00:22:30.183 } 00:22:30.183 EOF 00:22:30.183 )") 00:22:30.183 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.183 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.183 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.183 { 00:22:30.183 "params": { 00:22:30.183 "name": "Nvme$subsystem", 00:22:30.183 "trtype": "$TEST_TRANSPORT", 00:22:30.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.183 "adrfam": "ipv4", 00:22:30.183 "trsvcid": "$NVMF_PORT", 00:22:30.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.183 "hdgst": ${hdgst:-false}, 00:22:30.183 "ddgst": ${ddgst:-false} 00:22:30.183 }, 00:22:30.183 "method": "bdev_nvme_attach_controller" 00:22:30.183 } 00:22:30.183 EOF 00:22:30.183 )") 00:22:30.183 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.183 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.183 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.183 { 00:22:30.183 "params": { 00:22:30.183 "name": "Nvme$subsystem", 00:22:30.183 "trtype": "$TEST_TRANSPORT", 00:22:30.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.183 "adrfam": "ipv4", 00:22:30.183 "trsvcid": "$NVMF_PORT", 00:22:30.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.183 "hdgst": ${hdgst:-false}, 00:22:30.183 "ddgst": ${ddgst:-false} 00:22:30.183 }, 00:22:30.183 "method": "bdev_nvme_attach_controller" 00:22:30.183 } 00:22:30.183 EOF 00:22:30.183 )") 00:22:30.183 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.183 [2024-10-30 14:09:28.481464] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:22:30.183 [2024-10-30 14:09:28.481517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093871 ] 00:22:30.444 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.444 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.444 { 00:22:30.444 "params": { 00:22:30.444 "name": "Nvme$subsystem", 00:22:30.444 "trtype": "$TEST_TRANSPORT", 00:22:30.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.444 "adrfam": "ipv4", 00:22:30.444 "trsvcid": "$NVMF_PORT", 00:22:30.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.444 "hdgst": ${hdgst:-false}, 00:22:30.444 "ddgst": ${ddgst:-false} 00:22:30.444 }, 00:22:30.444 "method": "bdev_nvme_attach_controller" 00:22:30.444 } 00:22:30.444 EOF 00:22:30.444 )") 00:22:30.444 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.444 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.444 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.444 { 00:22:30.444 "params": { 00:22:30.444 "name": "Nvme$subsystem", 00:22:30.444 "trtype": "$TEST_TRANSPORT", 00:22:30.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.444 "adrfam": "ipv4", 00:22:30.444 "trsvcid": "$NVMF_PORT", 00:22:30.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.444 "hdgst": ${hdgst:-false}, 00:22:30.444 "ddgst": ${ddgst:-false} 00:22:30.444 }, 00:22:30.444 "method": "bdev_nvme_attach_controller" 00:22:30.444 } 00:22:30.444 EOF 00:22:30.444 )") 00:22:30.444 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.444 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.444 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.444 { 00:22:30.444 "params": { 00:22:30.444 "name": "Nvme$subsystem", 00:22:30.444 "trtype": "$TEST_TRANSPORT", 00:22:30.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.444 "adrfam": "ipv4", 00:22:30.444 "trsvcid": "$NVMF_PORT", 00:22:30.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.444 "hdgst": ${hdgst:-false}, 00:22:30.444 "ddgst": ${ddgst:-false} 00:22:30.444 }, 00:22:30.444 "method": "bdev_nvme_attach_controller" 00:22:30.444 } 00:22:30.444 EOF 00:22:30.444 )") 00:22:30.444 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.445 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.445 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.445 { 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme$subsystem", 00:22:30.445 "trtype": "$TEST_TRANSPORT", 00:22:30.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.445 
"adrfam": "ipv4", 00:22:30.445 "trsvcid": "$NVMF_PORT", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.445 "hdgst": ${hdgst:-false}, 00:22:30.445 "ddgst": ${ddgst:-false} 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 } 00:22:30.445 EOF 00:22:30.445 )") 00:22:30.445 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:30.445 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:30.445 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:30.445 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme1", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 },{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme2", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 },{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme3", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 },{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme4", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 },{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme5", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 },{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme6", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 },{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme7", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 
00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 },{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme8", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 },{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme9", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 },{ 00:22:30.445 "params": { 00:22:30.445 "name": "Nvme10", 00:22:30.445 "trtype": "tcp", 00:22:30.445 "traddr": "10.0.0.2", 00:22:30.445 "adrfam": "ipv4", 00:22:30.445 "trsvcid": "4420", 00:22:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:30.445 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:30.445 "hdgst": false, 00:22:30.445 "ddgst": false 00:22:30.445 }, 00:22:30.445 "method": "bdev_nvme_attach_controller" 00:22:30.445 }' 00:22:30.445 [2024-10-30 14:09:28.570988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.445 [2024-10-30 14:09:28.607279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.828 Running I/O for 10 seconds... 
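The config fed to bdevperf as --json /dev/fd/63 is assembled by the heredoc/array pattern traced above (nvmf/common.sh@560-586): one bdev_nvme_attach_controller fragment per controller is appended to a bash array, the array is joined with IFS=',', and the result is validated with jq. Feeding it to bdevperf through process substitution is what yields the /dev/fd/63 path on the command line (an inference from the trace, not something shown verbatim). A condensed sketch, with the standard SPDK "subsystems"/"bdev" JSON-config layout assumed for the outer wrapper:

```bash
# Build the bdevperf JSON config for Nvme1..Nvme10 the way the trace does it:
# accumulate per-controller fragments, join them with commas, wrap and verify.
config=()
for i in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# The outer wrapper below is the standard SPDK JSON-config layout (an
# assumption here; the real gen_nvmf_target_json helper may differ in detail).
json=$(IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json <(echo "$json") -q 64 -o 65536 -w verify -t 10
```

With -q 64 -o 65536 -w verify -t 10, bdevperf runs a 10-second verify workload at queue depth 64 with 64 KiB I/Os against the attached controllers, which is the "Running I/O for 10 seconds..." line above.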
00:22:31.828 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.828 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:31.828 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:31.828 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.828 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:31.829 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:32.088 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:32.088 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:32.088 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:32.088 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:32.088 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.088 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.349 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.349 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:32.349 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:32.349 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=135 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1093631 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1093631 ']' 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1093631 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093631 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:32.622 14:09:30 
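The polling traced above is shutdown.sh's waitforio helper: it reads num_read_ops for Nvme1n1 from bdevperf's RPC socket, retries up to ten times with a 0.25 s pause, and succeeds once at least 100 reads have completed (3, then 67, then 135 in this run). Condensed into a single function, with rpc.py standing in for the framework's rpc_cmd wrapper, it is roughly:

```bash
# Wait until bdevperf has completed at least 100 reads on the given bdev,
# polling its iostat at most 10 times, 0.25 s apart (values from the trace).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$("$rpc" -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1   # this run: 3, then 67, then 135 reads
```

Once the threshold is reached the test proceeds to killprocess 1093631 (kill followed by wait on the nvmf_tgt pid, as traced next), and that forced shutdown is what triggers the burst of nvmf_tcp_qpair_set_recv_state errors that follows.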
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093631' 00:22:32.622 killing process with pid 1093631 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1093631 00:22:32.622 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1093631 00:22:32.622 [2024-10-30 14:09:30.836454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) 
to be set 00:22:32.622 [2024-10-30 14:09:30.836596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.622 [2024-10-30 14:09:30.836666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.836809] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce340 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the 
state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.623 [2024-10-30 14:09:30.838821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.838826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.838830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.838835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.838840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.838844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.838849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc7e0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 
14:09:30.839895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same 
with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.839998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840095] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.840179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ce810 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the 
state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.624 [2024-10-30 14:09:30.841375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 
14:09:30.841541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.841617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cece0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same 
with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842766] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.625 [2024-10-30 14:09:30.842829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the 
state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.842947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf1d0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.843480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf6a0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.843496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cf6a0 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 
14:09:30.844964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.626 [2024-10-30 14:09:30.844999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same 
with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0040 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845801] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the 
state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.845996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0530 is same with the state(6) to be set 00:22:32.627 [2024-10-30 14:09:30.846506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 
14:09:30.846575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.846596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same 
with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859524] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.859559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0a00 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.861086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8eb0 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.861236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 
[2024-10-30 14:09:30.861276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a50 is same with the state(6) to be set 00:22:32.628 [2024-10-30 14:09:30.861324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.628 [2024-10-30 14:09:30.861361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.628 [2024-10-30 14:09:30.861368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160d2b0 is same with the state(6) to be set 00:22:32.629 [2024-10-30 14:09:30.861413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1655d30 is same with the state(6) to be set 00:22:32.629 [2024-10-30 14:09:30.861504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ded40 is same with the state(6) to be set 00:22:32.629 [2024-10-30 14:09:30.861593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1623750 is same with the state(6) to be set 00:22:32.629 [2024-10-30 14:09:30.861678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 
14:09:30.861686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e68b0 is same with the state(6) to be set 00:22:32.629 [2024-10-30 14:09:30.861783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e0060 is same with the state(6) to be set 00:22:32.629 [2024-10-30 14:09:30.861873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e71c0 is same with the state(6) to be set 00:22:32.629 [2024-10-30 14:09:30.861958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.861992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.861999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.629 [2024-10-30 14:09:30.862014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160c750 is same with the state(6) to be set 00:22:32.629 [2024-10-30 14:09:30.862489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.629 [2024-10-30 14:09:30.862510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.629 [2024-10-30 14:09:30.862532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.629 [2024-10-30 14:09:30.862550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 
[2024-10-30 14:09:30.862561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.629 [2024-10-30 14:09:30.862568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.629 [2024-10-30 14:09:30.862585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.629 [2024-10-30 14:09:30.862602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.629 [2024-10-30 14:09:30.862619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.629 [2024-10-30 14:09:30.862636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.629 [2024-10-30 14:09:30.862657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.629 [2024-10-30 14:09:30.862667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 
14:09:30.862738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 
14:09:30.862915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.862990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.862999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 
14:09:30.863083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 
14:09:30.863250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.630 [2024-10-30 14:09:30.863310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.630 [2024-10-30 14:09:30.863320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 
14:09:30.863420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:32.631 [2024-10-30 14:09:30.863704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 
14:09:30.863862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.863992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.863999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.631 [2024-10-30 14:09:30.864009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.631 [2024-10-30 14:09:30.864017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 
14:09:30.864034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 
14:09:30.864209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.864323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.864330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 
14:09:30.869772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 
14:09:30.869947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.869992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.869999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.870009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.870017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.870026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.870034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.870043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.870051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.870060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.870067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.870077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.870086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.870096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.632 [2024-10-30 14:09:30.870104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.632 [2024-10-30 14:09:30.870113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 
14:09:30.870121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.870131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.870138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.870148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.870155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.870165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.870172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.870182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.870189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.870199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.870206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.870215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ece90 is same with the state(6) to be set 00:22:32.633 [2024-10-30 14:09:30.873180] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:32.633 [2024-10-30 14:09:30.873213] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:32.633 [2024-10-30 14:09:30.873231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e71c0 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160d2b0 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8eb0 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8a50 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1655d30 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ded40 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1623750 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873380] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e68b0 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e0060 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160c750 (9): Bad file descriptor 00:22:32.633 [2024-10-30 14:09:30.873872] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:32.633 [2024-10-30 14:09:30.873957] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:32.633 [2024-10-30 14:09:30.874668] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:32.633 [2024-10-30 14:09:30.874715] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:32.633 [2024-10-30 14:09:30.875266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.633 [2024-10-30 14:09:30.875307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160d2b0 with addr=10.0.0.2, port=4420 00:22:32.633 [2024-10-30 14:09:30.875319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160d2b0 is same with the state(6) to be set 00:22:32.633 [2024-10-30 14:09:30.875506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.633 [2024-10-30 14:09:30.875518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e71c0 with addr=10.0.0.2, port=4420 00:22:32.633 [2024-10-30 14:09:30.875525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e71c0 is same with the state(6) to be set 00:22:32.633 [2024-10-30 14:09:30.875602] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:32.633 [2024-10-30 14:09:30.875647] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:32.633 [2024-10-30 14:09:30.875685] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:32.633 [2024-10-30 14:09:30.875742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:32.633 [2024-10-30 14:09:30.875835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.875980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.875994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 
[2024-10-30 14:09:30.876029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 
14:09:30.876201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.633 [2024-10-30 14:09:30.876235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.633 [2024-10-30 14:09:30.876243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876373] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.876884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.876893] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ee410 is same with the state(6) to be set 00:22:32.634 [2024-10-30 14:09:30.876986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160d2b0 (9): Bad file descriptor 00:22:32.634 [2024-10-30 14:09:30.877000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e71c0 (9): Bad file descriptor 00:22:32.634 [2024-10-30 14:09:30.894975] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:32.634 task offset: 24832 on job bdev=Nvme5n1 fails 00:22:32.634 1812.51 IOPS, 113.28 MiB/s [2024-10-30T13:09:30.933Z] [2024-10-30 14:09:30.895033] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:32.634 [2024-10-30 14:09:30.895043] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:32.634 [2024-10-30 14:09:30.895053] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:32.634 [2024-10-30 14:09:30.895069] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:32.634 [2024-10-30 14:09:30.895076] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:32.634 [2024-10-30 14:09:30.895083] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:32.634 [2024-10-30 14:09:30.895160] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:22:32.634 [2024-10-30 14:09:30.895174] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:22:32.634 [2024-10-30 14:09:30.895236] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:32.634 [2024-10-30 14:09:30.895246] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:22:32.634 [2024-10-30 14:09:30.895472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.634 [2024-10-30 14:09:30.895487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ded40 with addr=10.0.0.2, port=4420 00:22:32.634 [2024-10-30 14:09:30.895496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ded40 is same with the state(6) to be set 00:22:32.634 [2024-10-30 14:09:30.895530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.634 [2024-10-30 14:09:30.895844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.634 [2024-10-30 14:09:30.895854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.895862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.895871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.895878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.895888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.895895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.895905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.895912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.895922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.895929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.895940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.895947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.895957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.895964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.895974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.895981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.895991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.895998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:32.635 [2024-10-30 14:09:30.896213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 
[2024-10-30 14:09:30.896386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 
14:09:30.896553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.896630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.896638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169bd50 is same with the state(6) to be set 00:22:32.635 [2024-10-30 14:09:30.897916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.897930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.897942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.897952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.897963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.897972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.897983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.897991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.898000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.898008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.898018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.898025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.898035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.898042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.898051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.898059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.898068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.898078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.898088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.898095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.898105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.898112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.898121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.635 [2024-10-30 14:09:30.898128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.635 [2024-10-30 14:09:30.898138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.898988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.898996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.899005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.899012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.899021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169d090 is same with the state(6) to be set 00:22:32.636 [2024-10-30 14:09:30.900294] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.900308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.900321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.900330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.900341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.900351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.636 [2024-10-30 14:09:30.900362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.636 [2024-10-30 14:09:30.900370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.900985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.900994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:32.637 [2024-10-30 14:09:30.901002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 
14:09:30.901172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.637 [2024-10-30 14:09:30.901306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.637 [2024-10-30 14:09:30.901316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.901323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.901332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.901339] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.901349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.901356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.901365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.901372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.901382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.901389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.901397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185e6d0 is same with the state(6) to be set 00:22:32.638 [2024-10-30 14:09:30.902673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.902983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.902993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.638 [2024-10-30 14:09:30.903474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.638 [2024-10-30 14:09:30.903484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:32.638-639 [2024-10-30 14:09:30.903491 through 14:09:30.903781: READ sqid:1 cid:46..63 nsid:1 lba:30464..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; repeated for each cid]
00:22:32.639 [2024-10-30 14:09:30.903789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ea400 is same with the state(6) to be set
00:22:32.639-932 [2024-10-30 14:09:30.905340 through 14:09:30.912332: READ sqid:1 cid:6..63 nsid:1 lba:25344..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; repeated for each cid]
00:22:32.932 [2024-10-30 14:09:30.912341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712a20 is same with the state(6) to be set
00:22:32.932-934 [2024-10-30 14:09:30.913645 through 14:09:30.914759: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; repeated for each cid]
00:22:32.934 [2024-10-30 14:09:30.914767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713fa0 is same with the state(6) to be set
00:22:32.934 [2024-10-30 14:09:30.916040] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:32.934 [2024-10-30 14:09:30.916056] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:32.934 [2024-10-30 14:09:30.916066] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:32.934 [2024-10-30 14:09:30.916110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ded40 (9): Bad file descriptor
00:22:32.934 [2024-10-30 14:09:30.916161] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:32.934 [2024-10-30 14:09:30.916175] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:22:32.934 [2024-10-30 14:09:30.916187] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:22:32.934 [2024-10-30 14:09:30.916200] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:22:32.934 [2024-10-30 14:09:30.916223] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
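The completions condensed above all carry NVMe status (00/08), which in the standard NVMe status encoding is status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion; that is consistent with the controller resets and queue-pair teardown recorded just above, where each outstanding READ is printed once with its matching aborted completion. Below is a minimal sketch, not part of the SPDK test suite, for summarizing this kind of console output offline; the script name, the file-path argument, and the regular expressions are assumptions based on the record format visible in this log.

#!/usr/bin/env python3
# summarize_aborts.py (hypothetical helper, assumption: record format as seen in this log)
# Usage: python3 summarize_aborts.py console.log
import re
import sys
from collections import defaultdict

# Patterns copied from the record format printed by nvme_qpair.c above.
READ_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")

def summarize(path):
    reads, aborts, lbas = 0, 0, []
    per_sqid = defaultdict(int)  # READ count per submission queue id
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in READ_RE.finditer(line):  # every READ command printed on this line
                reads += 1
                per_sqid[int(m.group(1))] += 1
                lbas.append(int(m.group(4)))
            aborts += len(ABORT_RE.findall(line))
    print(f"READ commands printed: {reads}")
    print(f"ABORTED - SQ DELETION completions: {aborts}")
    if lbas:
        print(f"LBA range: {min(lbas)}..{max(lbas)}")
    for sqid, count in sorted(per_sqid.items()):
        print(f"  sqid {sqid}: {count} READs")

if __name__ == "__main__":
    summarize(sys.argv[1])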
00:22:32.934-935 [2024-10-30 14:09:30.916316 through 14:09:30.917337: READ sqid:1 cid:0..59 nsid:1 lba:24576..32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; repeated for each cid]
00:22:32.935 [2024-10-30 14:09:30.917347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:32.935 [2024-10-30 14:09:30.917354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:32.935 [2024-10-30 14:09:30.917363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:32.935 [2024-10-30 14:09:30.917371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:32.935 [2024-10-30 14:09:30.917380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:32.935 [2024-10-30 14:09:30.917388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:32.936 [2024-10-30 14:09:30.917397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:32.936 [2024-10-30 14:09:30.917404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:32.936 [2024-10-30 14:09:30.917413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ef990 is same with the state(6) to be set
00:22:32.936 [2024-10-30 14:09:30.918937] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:32.936 [2024-10-30 14:09:30.918960] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:32.936 [2024-10-30 14:09:30.918970] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:32.936
00:22:32.936 Latency(us) -- all 10 jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended in about the runtime shown below, with error
00:22:32.936 [2024-10-30T13:09:31.235Z] Device Information : runtime(s)     IOPS   MiB/s   Fail/s   TO/s     Average        min        max
00:22:32.936 Nvme1n1            :       1.02   191.88   11.99    62.66   0.00   248722.56    5488.64  251658.24
00:22:32.936 Nvme2n1            :       1.02   125.02    7.81    62.51   0.00   331225.88   21080.75  255153.49
00:22:32.936 Nvme3n1            :       1.03   191.00   11.94    62.37   0.00   240352.44   18896.21  242920.11
00:22:32.936 Nvme4n1            :       1.03   186.67   11.67    62.22   0.00   239923.63   17803.95  244667.73
00:22:32.936 Nvme5n1            :       1.00   192.86   12.05    64.29   0.00   226871.25    8519.68  251658.24
00:22:32.936 Nvme6n1            :       1.00   192.64   12.04    64.21   0.00   222257.39   11304.96  253405.87
00:22:32.936 Nvme7n1            :       1.01   194.36   12.15    63.14   0.00   217338.05   21408.43  239424.85
00:22:32.936 Nvme8n1            :       1.04   184.23   11.51    61.41   0.00   223855.57   15400.96  248162.99
00:22:32.936 Nvme9n1            :       1.04   190.92   11.93    55.93   0.00   216831.36   15291.73  232434.35
00:22:32.936 Nvme10n1           :       1.04   123.13    7.70    61.57   0.00   284753.35   23483.73  269134.51
00:22:32.936 [2024-10-30T13:09:31.235Z] ===================================================================================================================
00:22:32.936 [2024-10-30T13:09:31.235Z] Total              :              1772.72  110.79   620.30   0.00   241867.48    5488.64  269134.51
00:22:32.936 [2024-10-30 14:09:30.944981] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:32.936 [2024-10-30 14:09:30.945395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:32.936 [2024-10-30 14:09:30.945419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e8eb0 with addr=10.0.0.2, port=4420
00:22:32.936 [2024-10-30 14:09:30.945430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8eb0 is same with the state(6) to be set
00:22:32.936 [2024-10-30 14:09:30.945635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:32.936 [2024-10-30 14:09:30.945646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e8a50 with addr=10.0.0.2, port=4420
00:22:32.936 [2024-10-30 14:09:30.945654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a50 is same with the state(6) to be set
00:22:32.936 [2024-10-30 14:09:30.945944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:32.936 [2024-10-30 14:09:30.945955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e68b0 with addr=10.0.0.2, port=4420
00:22:32.936 [2024-10-30 14:09:30.945962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e68b0 is same with the state(6) to be set
00:22:32.936 [2024-10-30 14:09:30.945971] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:32.936 [2024-10-30 14:09:30.945978] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:32.936
[2024-10-30 14:09:30.945988] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:32.936 [2024-10-30 14:09:30.947620] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:32.936 [2024-10-30 14:09:30.947637] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:32.936 [2024-10-30 14:09:30.947648] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:32.936 [2024-10-30 14:09:30.947668] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:32.936 [2024-10-30 14:09:30.948017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.936 [2024-10-30 14:09:30.948032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e0060 with addr=10.0.0.2, port=4420 00:22:32.936 [2024-10-30 14:09:30.948039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e0060 is same with the state(6) to be set 00:22:32.936 [2024-10-30 14:09:30.948449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.936 [2024-10-30 14:09:30.948460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1623750 with addr=10.0.0.2, port=4420 00:22:32.936 [2024-10-30 14:09:30.948468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1623750 is same with the state(6) to be set 00:22:32.936 [2024-10-30 14:09:30.948813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.936 [2024-10-30 14:09:30.948824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1655d30 with addr=10.0.0.2, port=4420 00:22:32.936 [2024-10-30 14:09:30.948831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1655d30 is same with the state(6) to be set 00:22:32.936 [2024-10-30 14:09:30.948844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8eb0 (9): Bad file descriptor 00:22:32.936 [2024-10-30 14:09:30.948857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8a50 (9): Bad file descriptor 00:22:32.936 [2024-10-30 14:09:30.948866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e68b0 (9): Bad file descriptor 00:22:32.936 [2024-10-30 14:09:30.948907] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:22:32.936 [2024-10-30 14:09:30.948919] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:22:32.936 [2024-10-30 14:09:30.948936] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:22:32.936 [2024-10-30 14:09:30.949446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.936 [2024-10-30 14:09:30.949461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160c750 with addr=10.0.0.2, port=4420 00:22:32.936 [2024-10-30 14:09:30.949468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160c750 is same with the state(6) to be set 00:22:32.936 [2024-10-30 14:09:30.949756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.936 [2024-10-30 14:09:30.949766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e71c0 with addr=10.0.0.2, port=4420 00:22:32.936 [2024-10-30 14:09:30.949774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e71c0 is same with the state(6) to be set 00:22:32.937 [2024-10-30 14:09:30.949966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.937 [2024-10-30 14:09:30.949976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160d2b0 with addr=10.0.0.2, port=4420 00:22:32.937 [2024-10-30 14:09:30.949983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160d2b0 is same with the state(6) to be set 00:22:32.937 [2024-10-30 14:09:30.949992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e0060 (9): Bad file descriptor 00:22:32.937 [2024-10-30 14:09:30.950002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1623750 (9): Bad file descriptor 00:22:32.937 [2024-10-30 14:09:30.950012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1655d30 (9): Bad file descriptor 00:22:32.937 [2024-10-30 14:09:30.950020] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950027] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950035] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950047] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950054] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950060] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950072] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950078] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950085] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950163] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:32.937 [2024-10-30 14:09:30.950175] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:32.937 [2024-10-30 14:09:30.950183] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:32.937 [2024-10-30 14:09:30.950190] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:32.937 [2024-10-30 14:09:30.950203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160c750 (9): Bad file descriptor 00:22:32.937 [2024-10-30 14:09:30.950213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e71c0 (9): Bad file descriptor 00:22:32.937 [2024-10-30 14:09:30.950227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160d2b0 (9): Bad file descriptor 00:22:32.937 [2024-10-30 14:09:30.950235] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950242] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950249] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950258] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950265] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950272] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950282] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950289] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950296] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950331] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:32.937 [2024-10-30 14:09:30.950340] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:32.937 [2024-10-30 14:09:30.950348] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:32.937 [2024-10-30 14:09:30.950513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.937 [2024-10-30 14:09:30.950524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ded40 with addr=10.0.0.2, port=4420 00:22:32.937 [2024-10-30 14:09:30.950532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ded40 is same with the state(6) to be set 00:22:32.937 [2024-10-30 14:09:30.950539] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950546] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950553] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950563] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950570] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950577] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950586] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950593] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950600] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950629] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:32.937 [2024-10-30 14:09:30.950637] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:32.937 [2024-10-30 14:09:30.950644] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:32.937 [2024-10-30 14:09:30.950653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ded40 (9): Bad file descriptor 00:22:32.937 [2024-10-30 14:09:30.950681] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:32.937 [2024-10-30 14:09:30.950689] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:32.937 [2024-10-30 14:09:30.950696] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:32.937 [2024-10-30 14:09:30.950724] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
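The burst of aborted READs and failed reconnects above is the intended outcome of shutdown_tc3: the target is torn down while bdevperf still has I/O in flight, so every controller path ends in "Resetting controller failed" and the perf process must exit non-zero. The xtrace that follows checks exactly that with the autotest NOT/wait helpers; the bash below is only a rough sketch of that expected-failure pattern (the helper body is an assumption, and the pid 1093871 is simply the one from this run).

    # Sketch: succeed only when the wrapped command fails (rough stand-in for the
    # autotest_common.sh NOT helper seen in the trace, not its exact implementation).
    NOT() {
        if "$@"; then
            return 1          # command unexpectedly succeeded -> test failure
        fi
        return 0              # non-zero exit is what the shutdown test expects
    }
    perfpid=1093871           # bdevperf pid from this run; differs on every run
    NOT wait "$perfpid"       # tc3 passes only because bdevperf aborted with an error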
00:22:32.937 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1093871 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1093871 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1093871 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.879 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.879 rmmod nvme_tcp 00:22:33.879 
rmmod nvme_fabrics 00:22:33.879 rmmod nvme_keyring 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1093631 ']' 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1093631 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1093631 ']' 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1093631 00:22:34.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1093631) - No such process 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1093631 is not found' 00:22:34.141 Process with pid 1093631 is not found 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.141 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.056 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.056 00:22:36.056 real 0m7.665s 00:22:36.056 user 0m18.429s 00:22:36.056 sys 0m1.257s 00:22:36.056 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.056 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:36.056 ************************************ 00:22:36.056 END TEST nvmf_shutdown_tc3 00:22:36.056 ************************************ 00:22:36.056 14:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:36.056 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:36.056 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:36.056 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:36.056 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.056 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:36.318 ************************************ 00:22:36.318 START TEST nvmf_shutdown_tc4 00:22:36.318 ************************************ 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:36.318 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:36.318 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.318 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.319 14:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:36.319 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:36.319 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:36.319 14:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:36.319 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:36.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:22:36.580 00:22:36.580 --- 10.0.0.2 ping statistics --- 00:22:36.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.580 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:36.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:22:36.580 00:22:36.580 --- 10.0.0.1 ping statistics --- 00:22:36.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.580 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:36.580 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1095168 00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1095168 00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1095168 ']' 00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
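The waitforlisten above completes the environment that nvmftestinit and nvmfappstart just traced: the first E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace as the target side, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and nvmf_tgt is started inside the namespace. A condensed sketch of that sequence, using only commands visible in the trace (interface names, addresses, and the nvmf_tgt path are specific to this host), looks like:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the listener
    ping -c 1 10.0.0.2                                             # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    # waitforlisten then polls the target's RPC socket at /var/tmp/spdk.sock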
00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.581 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:36.581 [2024-10-30 14:09:34.803109] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:22:36.581 [2024-10-30 14:09:34.803189] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.841 [2024-10-30 14:09:34.902104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.841 [2024-10-30 14:09:34.935774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.841 [2024-10-30 14:09:34.935805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.841 [2024-10-30 14:09:34.935811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.841 [2024-10-30 14:09:34.935816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.842 [2024-10-30 14:09:34.935821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.842 [2024-10-30 14:09:34.937267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.842 [2024-10-30 14:09:34.937425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.842 [2024-10-30 14:09:34.937575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.842 [2024-10-30 14:09:34.937577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:37.412 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.412 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:37.412 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:37.412 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.412 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.412 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.412 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.413 [2024-10-30 14:09:35.655945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:37.413 14:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.413 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.673 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:37.673 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:37.673 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:37.673 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.673 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.673 Malloc1 
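By this point shutdown.sh@21 has created the TCP transport (rpc_cmd nvmf_create_transport -t tcp -o -u 8192, acknowledged by the '*** TCP Transport Init ***' notice above), and the shutdown.sh@28-29 loop traced here appends one batch of RPC lines per subsystem to rpcs.txt; the bare rpc_cmd at shutdown.sh@36 then replays the accumulated file, which is what produces the Malloc1 line above and the Malloc2 through Malloc10 lines that follow. A hedged sketch of what a single iteration plausibly emits; the exact RPC set, sizes and flags are assumptions inferred from the MallocN bdevs, the nqn.2016-06.io.spdk:cnodeN names and the 10.0.0.2:4420 listener seen elsewhere in this log, not taken from shutdown.sh:

    # Hypothetical per-subsystem batch appended to rpcs.txt (i runs 1..10).
    # The traced script uses cat with a here-document; plain echo keeps the sketch short.
    i=1
    {
        echo "bdev_malloc_create 128 512 -b Malloc$i"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> rpcs.txt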
00:22:37.673 [2024-10-30 14:09:35.771480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:37.673 Malloc2
00:22:37.673 Malloc3
00:22:37.673 Malloc4
00:22:37.673 Malloc5
00:22:37.673 Malloc6
00:22:37.933 Malloc7
00:22:37.933 Malloc8
00:22:37.933 Malloc9
00:22:37.933 Malloc10
00:22:37.933 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:37.933 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:22:37.933 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:37.933 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:37.933 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1095555
00:22:37.933 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:22:37.933 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:22:38.193 [2024-10-30 14:09:36.252157] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1095168
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1095168 ']'
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1095168
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1095168
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1095168'
killing process with pid 1095168
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1095168
00:22:43.480 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1095168
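The shutdown scenario itself is traced above: shutdown.sh@148-150 starts spdk_nvme_perf in the background against the 10.0.0.2:4420 TCP listener (queue depth 128, 45056-byte random writes for 20 seconds) and sleeps 5 seconds so I/O is in flight, then shutdown.sh@155 calls killprocess, which sends a plain kill (SIGTERM) to the nvmf_tgt pid and waits for it. The flood of transport errors below is the initiator side reacting to the target disappearing mid-workload, which is exactly what this test case exercises. A condensed sketch of that sequence; the binary path, flags and pid come from the trace, the error handling is simplified:

    # Sketch: drive I/O at the target, then take the target down while the workload runs.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    nvmfpid=1095168          # pid of the nvmf_tgt launched earlier (the harness captures it via $!)

    "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!

    sleep 5                  # let the random-write workload ramp up on the subsystems
    kill "$nvmfpid"          # SIGTERM the target while perf still has I/O outstanding
    wait "$nvmfpid" || true  # mirror the harness's wait; perf now sees CQ transport errors on every queue pair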
00:22:43.480 [2024-10-30 14:09:41.246493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcf0 is same with the state(6) to be set [repeated]
00:22:43.480 [2024-10-30 14:09:41.246635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9de1c0 is same with the state(6) to be set [repeated]
00:22:43.480 [2024-10-30 14:09:41.246950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9de690 is same with the state(6) to be set [repeated]
00:22:43.480 [2024-10-30 14:09:41.247193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dd820 is same with the state(6) to be set [repeated]
00:22:43.481 Write completed with error (sct=0, sc=8) [repeated for every outstanding write on each queue pair, interleaved with "starting I/O failed: -6"]
00:22:43.481 [2024-10-30 14:09:41.248675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:43.481 NVMe io qpair process completion error
00:22:43.481 [2024-10-30 14:09:41.252845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:43.481 [2024-10-30 14:09:41.253680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:43.481 [2024-10-30 14:09:41.253827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9df050 is same with the state(6) to be set [repeated]
00:22:43.481 [2024-10-30 14:09:41.254109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9df520 is same with the state(6) to be set [repeated]
00:22:43.481 [2024-10-30 14:09:41.254323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9df9f0 is same with the state(6) to be set [repeated]
00:22:43.481 [2024-10-30 14:09:41.254504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9deb80 is same with the state(6) to be set [repeated]
00:22:43.481 [2024-10-30 14:09:41.254620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:43.482 [2024-10-30 14:09:41.256051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:43.482 NVMe io qpair process completion error
00:22:43.482 [2024-10-30 14:09:41.257143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:43.482 [2024-10-30 14:09:41.258162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:43.482 [2024-10-30 14:09:41.259104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:43.483 [2024-10-30 14:09:41.260564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:43.483 NVMe io qpair process completion error
00:22:43.483 [2024-10-30 14:09:41.262901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:43.483 [2024-10-30 14:09:41.263839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:43.484 [2024-10-30 14:09:41.266049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:43.484 NVMe io qpair process completion error
00:22:43.484 Write completed with error (sct=0, sc=8) [repeated]
00:22:43.484 starting
I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 [2024-10-30 14:09:41.267141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:43.484 starting I/O failed: -6 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 [2024-10-30 14:09:41.268105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ 
transport error -6 (No such device or address) on qpair id 3 00:22:43.484 starting I/O failed: -6 00:22:43.484 starting I/O failed: -6 00:22:43.484 starting I/O failed: -6 00:22:43.484 starting I/O failed: -6 00:22:43.484 starting I/O failed: -6 00:22:43.484 starting I/O failed: -6 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 [2024-10-30 14:09:41.269277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 
00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.484 Write completed with error (sct=0, sc=8) 00:22:43.484 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 [2024-10-30 14:09:41.270933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:43.485 NVMe io qpair process completion error 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, 
sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 [2024-10-30 14:09:41.272080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 
Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 [2024-10-30 14:09:41.273018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error 
(sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 [2024-10-30 14:09:41.273935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed 
with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.485 starting I/O failed: -6 00:22:43.485 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with 
error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 [2024-10-30 14:09:41.276727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:43.486 NVMe io qpair process completion error 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 [2024-10-30 14:09:41.278006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:43.486 starting I/O failed: -6 
00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 [2024-10-30 14:09:41.278832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with 
error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 [2024-10-30 14:09:41.279772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:43.486 Write 
completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write 
completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.486 Write completed with error (sct=0, sc=8) 00:22:43.486 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 [2024-10-30 14:09:41.281712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:43.487 NVMe io qpair process completion error 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed 
with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 starting I/O failed: -6 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 Write completed with error (sct=0, sc=8) 00:22:43.487 [2024-10-30 14:09:41.283605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on 
qpair id 2
[log trimmed: hundreds of repeated "Write completed with error (sct=0, sc=8)" entries, most immediately followed by "starting I/O failed: -6" (elapsed timestamps 00:22:43.487-00:22:43.490), surround the distinct qpair error messages kept below]
00:22:43.487 [2024-10-30 14:09:41.284544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:43.488 [2024-10-30 14:09:41.285986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:43.488 NVMe io qpair process completion error
00:22:43.488 [2024-10-30 14:09:41.287332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:43.488 [2024-10-30 14:09:41.288171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:43.488 [2024-10-30 14:09:41.289140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:43.489 [2024-10-30 14:09:41.291894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:43.489 NVMe io qpair process completion error
00:22:43.489 [2024-10-30 14:09:41.295366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:43.489 [2024-10-30 14:09:41.296219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:43.490 [2024-10-30 14:09:41.297161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:43.490 [2024-10-30 14:09:41.299050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:43.490 NVMe io qpair process completion error
00:22:43.490 Initializing NVMe Controllers
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:43.490 Controller IO queue size 128, less than required.
00:22:43.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:43.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:43.490 Initialization complete. Launching workers.
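The Latency(us) summary that follows ends in a Total row; it appears to be derived from the ten per-subsystem rows by summing IOPS and MiB/s, taking the overall min and max, and forming what looks like an I/O-count-weighted mean of the Average column. A minimal cross-check sketch, assuming the ten TCP data rows (and only those) are pasted into a scratch file latency.txt (hypothetical name) so that the five numeric columns are the last five fields of each line:

awk '{
  # last five fields of each data row: IOPS, MiB/s, Average, min, max
  iops = $(NF-4); bw = $(NF-3); avg = $(NF-2); lo = $(NF-1); hi = $NF
  tot_iops += iops; tot_bw += bw
  wsum += iops * avg                       # assumes the Total average is IOPS-weighted
  if (min == "" || lo < min) min = lo
  if (max == "" || hi > max) max = hi
}
END {
  printf "Total: %.2f IOPS, %.2f MiB/s, avg %.2f us, min %.2f, max %.2f\n",
         tot_iops, tot_bw, wsum / tot_iops, min, max
}' latency.txt

Within rounding, this should reproduce the reported totals of 18996.00 IOPS, 816.23 MiB/s, 67150.38 us average, 515.97 min and 149716.04 max.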
00:22:43.490 ========================================================
00:22:43.490 Latency(us)
00:22:43.490 Device Information : IOPS MiB/s Average min max
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1942.76 83.48 65903.41 700.96 126922.01
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1894.17 81.39 67619.16 577.84 149716.04
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1929.12 82.89 66416.42 612.29 121121.27
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1897.58 81.54 67487.19 616.03 131847.07
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1867.10 80.23 68647.71 726.58 131152.61
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1922.73 82.62 66016.47 612.25 117018.12
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1875.62 80.59 67693.99 515.97 117607.41
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1907.17 81.95 66599.60 561.65 117778.27
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1899.92 81.64 66885.31 666.86 121748.80
00:22:43.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1859.85 79.92 68349.63 788.85 123997.43
00:22:43.490 ========================================================
00:22:43.490 Total : 18996.00 816.23 67150.38 515.97 149716.04
00:22:43.490
00:22:43.490 [2024-10-30 14:09:41.303516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b890 is same with the state(6) to be set
00:22:43.490 [2024-10-30 14:09:41.303560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bbc0 is same with the state(6) to be set
00:22:43.490 [2024-10-30 14:09:41.303590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b040 is same with the state(6) to be set
00:22:43.490 [2024-10-30 14:09:41.303618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78c870 is same with the state(6) to be set
00:22:43.490 [2024-10-30 14:09:41.303646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b560 is same with the state(6) to be set
00:22:43.490 [2024-10-30 14:09:41.303674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ca50 is same with the state(6) to be set
00:22:43.490 [2024-10-30 14:09:41.303702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78a9e0 is same with the state(6) to be set
00:22:43.490 [2024-10-30 14:09:41.303733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78cc30 is same with the state(6) to be set
00:22:43.490 [2024-10-30 14:09:41.303767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ad10 is same with the state(6) to be set
00:22:43.490 [2024-10-30 14:09:41.303796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78a6b0 is same with the state(6) to be set
00:22:43.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:43.490 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:44.432 14:09:42
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1095555 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1095555 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1095555 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.432 rmmod nvme_tcp 00:22:44.432 rmmod nvme_fabrics 00:22:44.432 rmmod nvme_keyring 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:44.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1095168 ']' 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1095168 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1095168 ']' 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1095168 00:22:44.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1095168) - No such process 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1095168 is not found' 00:22:44.433 Process with pid 1095168 is not found 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.433 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.979 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.979 00:22:46.979 real 0m10.287s 00:22:46.979 user 0m28.097s 00:22:46.979 sys 0m3.925s 00:22:46.979 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.979 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:46.979 ************************************ 00:22:46.979 END TEST nvmf_shutdown_tc4 00:22:46.979 ************************************ 00:22:46.979 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:46.979 00:22:46.979 real 0m43.319s 00:22:46.979 user 1m44.632s 00:22:46.979 sys 0m13.867s 00:22:46.979 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.979 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:22:46.979 ************************************ 00:22:46.979 END TEST nvmf_shutdown 00:22:46.979 ************************************ 00:22:46.979 14:09:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:46.979 00:22:46.979 real 12m50.717s 00:22:46.979 user 27m19.563s 00:22:46.979 sys 3m46.010s 00:22:46.979 14:09:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.979 14:09:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:46.979 ************************************ 00:22:46.979 END TEST nvmf_target_extra 00:22:46.979 ************************************ 00:22:46.979 14:09:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:46.979 14:09:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.979 14:09:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.979 14:09:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:46.979 ************************************ 00:22:46.979 START TEST nvmf_host 00:22:46.979 ************************************ 00:22:46.979 14:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:46.979 * Looking for test storage... 00:22:46.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:46.979 14:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:46.979 14:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:46.979 14:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:46.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.979 --rc genhtml_branch_coverage=1 00:22:46.979 --rc genhtml_function_coverage=1 00:22:46.979 --rc genhtml_legend=1 00:22:46.979 --rc geninfo_all_blocks=1 00:22:46.979 --rc geninfo_unexecuted_blocks=1 00:22:46.979 00:22:46.979 ' 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:46.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.979 --rc genhtml_branch_coverage=1 00:22:46.979 --rc genhtml_function_coverage=1 00:22:46.979 --rc genhtml_legend=1 00:22:46.979 --rc geninfo_all_blocks=1 00:22:46.979 --rc geninfo_unexecuted_blocks=1 00:22:46.979 00:22:46.979 ' 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:46.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.979 --rc genhtml_branch_coverage=1 00:22:46.979 --rc genhtml_function_coverage=1 00:22:46.979 --rc genhtml_legend=1 00:22:46.979 --rc geninfo_all_blocks=1 00:22:46.979 --rc geninfo_unexecuted_blocks=1 00:22:46.979 00:22:46.979 ' 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:46.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.979 --rc genhtml_branch_coverage=1 00:22:46.979 --rc genhtml_function_coverage=1 00:22:46.979 --rc genhtml_legend=1 00:22:46.979 --rc geninfo_all_blocks=1 00:22:46.979 --rc geninfo_unexecuted_blocks=1 00:22:46.979 00:22:46.979 ' 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
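For readers skimming the trace: this stretch, continuing just below, is test/nvmf/common.sh being sourced by nvmf_host.sh. It pins the TCP listener ports 4420/4421/4422 and the 192.168.100 prefix, then derives a per-run host identity from nvme gen-hostnqn. A minimal stand-alone sketch of that identity setup follows; the variable names are copied from the trace, while the exact way common.sh splits the host ID out of the NQN is an assumption on my part and may differ slightly.

  # sketch: per-run NVMe-oF host identity, as set up by the sourced common.sh
  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # reuse the UUID portion as the host ID (assumed derivation)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  echo "host identity: ${NVME_HOST[*]}"

The recurring "[: : integer expression expected" complaint from common.sh line 33 a little further down is the shell choking on an empty SPDK_* flag inside a numeric test; it only makes that branch evaluate false and does not stop the run.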
00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.979 14:09:45 nvmf_tcp.nvmf_host -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.980 ************************************ 00:22:46.980 START TEST nvmf_multicontroller 00:22:46.980 ************************************ 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:46.980 * Looking for test storage... 
00:22:46.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:46.980 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:47.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.242 --rc genhtml_branch_coverage=1 00:22:47.242 --rc genhtml_function_coverage=1 00:22:47.242 --rc genhtml_legend=1 00:22:47.242 --rc geninfo_all_blocks=1 00:22:47.242 --rc geninfo_unexecuted_blocks=1 00:22:47.242 00:22:47.242 ' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:47.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.242 --rc genhtml_branch_coverage=1 00:22:47.242 --rc genhtml_function_coverage=1 00:22:47.242 --rc genhtml_legend=1 00:22:47.242 --rc geninfo_all_blocks=1 00:22:47.242 --rc geninfo_unexecuted_blocks=1 00:22:47.242 00:22:47.242 ' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:47.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.242 --rc genhtml_branch_coverage=1 00:22:47.242 --rc genhtml_function_coverage=1 00:22:47.242 --rc genhtml_legend=1 00:22:47.242 --rc geninfo_all_blocks=1 00:22:47.242 --rc geninfo_unexecuted_blocks=1 00:22:47.242 00:22:47.242 ' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:47.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.242 --rc genhtml_branch_coverage=1 00:22:47.242 --rc genhtml_function_coverage=1 00:22:47.242 --rc genhtml_legend=1 00:22:47.242 --rc geninfo_all_blocks=1 00:22:47.242 --rc geninfo_unexecuted_blocks=1 00:22:47.242 00:22:47.242 ' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:47.242 14:09:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.242 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.243 14:09:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.243 14:09:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:55.387 
14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:55.387 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:55.387 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.387 14:09:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:55.387 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:55.387 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
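Having matched the two E810 ports (0x8086:0x159b, presented here as cvl_0_0 and cvl_0_1), the nvmf_tcp_init steps that follow turn them into a loop-back NVMe/TCP test bed: the target-side port is moved into a private network namespace, both ends get 10.0.0.x/24 addresses, an iptables rule opens TCP 4420 toward the initiator interface, and a ping in each direction confirms the path. Condensed into a stand-alone sketch, with interface and namespace names being the ones this run happened to use (they will differ on other hosts):

  # sketch of the nvmf_tcp_init wiring traced below
  TGT=cvl_0_0 INI=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT" netns "$NS"                          # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI"                      # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"  # target side
  ip link set "$INI" up
  ip netns exec "$NS" ip link set "$TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator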
00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.387 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:55.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:22:55.388 00:22:55.388 --- 10.0.0.2 ping statistics --- 00:22:55.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.388 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:22:55.388 00:22:55.388 --- 10.0.0.1 ping statistics --- 00:22:55.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.388 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1100971 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1100971 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1100971 ']' 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.388 14:09:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.388 [2024-10-30 14:09:52.897829] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:22:55.388 [2024-10-30 14:09:52.897907] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.388 [2024-10-30 14:09:52.997863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:55.388 [2024-10-30 14:09:53.049101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.388 [2024-10-30 14:09:53.049153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.388 [2024-10-30 14:09:53.049162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.388 [2024-10-30 14:09:53.049169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.388 [2024-10-30 14:09:53.049175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.388 [2024-10-30 14:09:53.051256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.388 [2024-10-30 14:09:53.051417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.388 [2024-10-30 14:09:53.051418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 [2024-10-30 14:09:53.762130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 Malloc0 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 [2024-10-30 14:09:53.834194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 [2024-10-30 14:09:53.846111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 Malloc1 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1101317 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1101317 /var/tmp/bdevperf.sock 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1101317 ']' 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
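The target-side configuration in this stretch is a plain JSON-RPC sequence against the nvmf_tgt application (started inside the namespace with -m 0xE, hence the three reactor cores), after which bdevperf is launched in -z mode, i.e. idle and waiting to be configured over its own RPC socket. Roughly the same flow can be driven by hand with scripts/rpc.py; the commands below are a sketch that mirrors the rpc_cmd calls in the trace, with serial numbers, NQNs and sizes copied from this run, and only cnode1 shown since cnode2 is symmetric:

  # sketch: the target-side RPCs behind multicontroller.sh's setup
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf itself is started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
  # and then configured over that second socket, exactly as the trace does next.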
00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.649 14:09:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.611 14:09:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.611 14:09:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:56.611 14:09:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:56.611 14:09:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.611 14:09:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.936 NVMe0n1 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.936 1 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.936 request: 00:22:56.936 { 00:22:56.936 "name": "NVMe0", 00:22:56.936 "trtype": "tcp", 00:22:56.936 "traddr": "10.0.0.2", 00:22:56.936 "adrfam": "ipv4", 00:22:56.936 "trsvcid": "4420", 00:22:56.936 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:56.936 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:56.936 "hostaddr": "10.0.0.1", 00:22:56.936 "prchk_reftag": false, 00:22:56.936 "prchk_guard": false, 00:22:56.936 "hdgst": false, 00:22:56.936 "ddgst": false, 00:22:56.936 "allow_unrecognized_csi": false, 00:22:56.936 "method": "bdev_nvme_attach_controller", 00:22:56.936 "req_id": 1 00:22:56.936 } 00:22:56.936 Got JSON-RPC error response 00:22:56.936 response: 00:22:56.936 { 00:22:56.936 "code": -114, 00:22:56.936 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:56.936 } 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:56.936 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.937 request: 00:22:56.937 { 00:22:56.937 "name": "NVMe0", 00:22:56.937 "trtype": "tcp", 00:22:56.937 "traddr": "10.0.0.2", 00:22:56.937 "adrfam": "ipv4", 00:22:56.937 "trsvcid": "4420", 00:22:56.937 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:56.937 "hostaddr": "10.0.0.1", 00:22:56.937 "prchk_reftag": false, 00:22:56.937 "prchk_guard": false, 00:22:56.937 "hdgst": false, 00:22:56.937 "ddgst": false, 00:22:56.937 "allow_unrecognized_csi": false, 00:22:56.937 "method": "bdev_nvme_attach_controller", 00:22:56.937 "req_id": 1 00:22:56.937 } 00:22:56.937 Got JSON-RPC error response 00:22:56.937 response: 00:22:56.937 { 00:22:56.937 "code": -114, 00:22:56.937 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:56.937 } 00:22:56.937 14:09:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.937 request: 00:22:56.937 { 00:22:56.937 "name": "NVMe0", 00:22:56.937 "trtype": "tcp", 00:22:56.937 "traddr": "10.0.0.2", 00:22:56.937 "adrfam": "ipv4", 00:22:56.937 "trsvcid": "4420", 00:22:56.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.937 "hostaddr": "10.0.0.1", 00:22:56.937 "prchk_reftag": false, 00:22:56.937 "prchk_guard": false, 00:22:56.937 "hdgst": false, 00:22:56.937 "ddgst": false, 00:22:56.937 "multipath": "disable", 00:22:56.937 "allow_unrecognized_csi": false, 00:22:56.937 "method": "bdev_nvme_attach_controller", 00:22:56.937 "req_id": 1 00:22:56.937 } 00:22:56.937 Got JSON-RPC error response 00:22:56.937 response: 00:22:56.937 { 00:22:56.937 "code": -114, 00:22:56.937 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:56.937 } 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.937 14:09:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.937 request: 00:22:56.937 { 00:22:56.937 "name": "NVMe0", 00:22:56.937 "trtype": "tcp", 00:22:56.937 "traddr": "10.0.0.2", 00:22:56.937 "adrfam": "ipv4", 00:22:56.937 "trsvcid": "4420", 00:22:56.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.937 "hostaddr": "10.0.0.1", 00:22:56.937 "prchk_reftag": false, 00:22:56.937 "prchk_guard": false, 00:22:56.937 "hdgst": false, 00:22:56.937 "ddgst": false, 00:22:56.937 "multipath": "failover", 00:22:56.937 "allow_unrecognized_csi": false, 00:22:56.937 "method": "bdev_nvme_attach_controller", 00:22:56.937 "req_id": 1 00:22:56.937 } 00:22:56.937 Got JSON-RPC error response 00:22:56.937 response: 00:22:56.937 { 00:22:56.937 "code": -114, 00:22:56.937 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:56.937 } 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.937 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.243 NVMe0n1 00:22:57.243 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
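[editor's note] The three -114 failures above are all the same duplicate-name check in bdev_nvme: once a controller named NVMe0 exists, another bdev_nvme_attach_controller with the same -b name appears to be accepted only as an additional path to the same subsystem, which is why the cnode2 attempt, the -x disable attempt and the -x failover attempt against the already-registered 10.0.0.2:4420 path are all rejected, while the 10.0.0.2:4421 attach just above succeeds. As a rough sketch (not part of the test output), the same behaviour can be driven by hand against the bdevperf RPC socket with the commands the test itself issues; rpc_cmd is the test framework's wrapper around scripts/rpc.py, and the socket path, addresses and NQNs below are copied from the log, so they are specific to this rig:

# duplicate name, different subsystem (cnode2)      -> -114, rejected
# duplicate name, same 4420 path, -x disable        -> -114, "multipath is disabled"
# duplicate name, same 4420 path, -x failover       -> -114, path already registered
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
# duplicate name, same subsystem, new 4421 listener -> accepted as a second path (bdev NVMe0n1)
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1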
00:22:57.243 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.243 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.243 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.243 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.244 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:57.244 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.244 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.546 00:22:57.546 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.546 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.546 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.546 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:57.546 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.546 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.546 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:57.546 14:09:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.484 { 00:22:58.484 "results": [ 00:22:58.484 { 00:22:58.484 "job": "NVMe0n1", 00:22:58.484 "core_mask": "0x1", 00:22:58.484 "workload": "write", 00:22:58.484 "status": "finished", 00:22:58.484 "queue_depth": 128, 00:22:58.484 "io_size": 4096, 00:22:58.484 "runtime": 1.005493, 00:22:58.484 "iops": 28827.64972008756, 00:22:58.484 "mibps": 112.60800671909203, 00:22:58.484 "io_failed": 0, 00:22:58.484 "io_timeout": 0, 00:22:58.484 "avg_latency_us": 4429.924471123991, 00:22:58.484 "min_latency_us": 2020.6933333333334, 00:22:58.484 "max_latency_us": 7591.253333333333 00:22:58.484 } 00:22:58.484 ], 00:22:58.484 "core_count": 1 00:22:58.484 } 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1101317 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1101317 ']' 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1101317 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.484 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1101317 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1101317' 00:22:58.743 killing process with pid 1101317 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1101317 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1101317 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:58.743 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:58.743 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:58.743 [2024-10-30 14:09:53.978167] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:22:58.743 [2024-10-30 14:09:53.978246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101317 ] 00:22:58.743 [2024-10-30 14:09:54.073562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.743 [2024-10-30 14:09:54.126906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.743 [2024-10-30 14:09:55.555626] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 5a2b43c3-3609-4eb0-9006-106f32489ffc already exists 00:22:58.743 [2024-10-30 14:09:55.555673] bdev.c:7836:bdev_register: *ERROR*: Unable to add uuid:5a2b43c3-3609-4eb0-9006-106f32489ffc alias for bdev NVMe1n1 00:22:58.743 [2024-10-30 14:09:55.555683] bdev_nvme.c:4604:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:58.743 Running I/O for 1 seconds... 00:22:58.743 28794.00 IOPS, 112.48 MiB/s 00:22:58.743 Latency(us) 00:22:58.743 [2024-10-30T13:09:57.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.743 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:58.744 NVMe0n1 : 1.01 28827.65 112.61 0.00 0.00 4429.92 2020.69 7591.25 00:22:58.744 [2024-10-30T13:09:57.043Z] =================================================================================================================== 00:22:58.744 [2024-10-30T13:09:57.043Z] Total : 28827.65 112.61 0.00 0.00 4429.92 2020.69 7591.25 00:22:58.744 Received shutdown signal, test time was about 1.000000 seconds 00:22:58.744 00:22:58.744 Latency(us) 00:22:58.744 [2024-10-30T13:09:57.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.744 [2024-10-30T13:09:57.043Z] =================================================================================================================== 00:22:58.744 [2024-10-30T13:09:57.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.744 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:58.744 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:58.744 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:58.744 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:58.744 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:58.744 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:58.744 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:58.744 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:58.744 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:58.744 14:09:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:58.744 rmmod nvme_tcp 00:22:58.744 rmmod nvme_fabrics 00:22:58.744 rmmod nvme_keyring 00:22:58.744 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:58.744 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:58.744 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:58.744 
14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1100971 ']' 00:22:58.744 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1100971 00:22:58.744 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1100971 ']' 00:22:58.744 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1100971 00:22:58.744 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:58.744 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.744 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100971 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100971' 00:22:59.003 killing process with pid 1100971 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1100971 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1100971 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.003 14:09:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:01.542 00:23:01.542 real 0m14.193s 00:23:01.542 user 0m17.929s 00:23:01.542 sys 0m6.552s 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.542 ************************************ 00:23:01.542 END TEST nvmf_multicontroller 00:23:01.542 ************************************ 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.542 ************************************ 00:23:01.542 START TEST nvmf_aer 00:23:01.542 ************************************ 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:01.542 * Looking for test storage... 00:23:01.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:01.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.542 --rc genhtml_branch_coverage=1 00:23:01.542 --rc genhtml_function_coverage=1 00:23:01.542 --rc genhtml_legend=1 00:23:01.542 --rc geninfo_all_blocks=1 00:23:01.542 --rc geninfo_unexecuted_blocks=1 00:23:01.542 00:23:01.542 ' 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:01.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.542 --rc genhtml_branch_coverage=1 00:23:01.542 --rc genhtml_function_coverage=1 00:23:01.542 --rc genhtml_legend=1 00:23:01.542 --rc geninfo_all_blocks=1 00:23:01.542 --rc geninfo_unexecuted_blocks=1 00:23:01.542 00:23:01.542 ' 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:01.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.542 --rc genhtml_branch_coverage=1 00:23:01.542 --rc genhtml_function_coverage=1 00:23:01.542 --rc genhtml_legend=1 00:23:01.542 --rc geninfo_all_blocks=1 00:23:01.542 --rc geninfo_unexecuted_blocks=1 00:23:01.542 00:23:01.542 ' 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:01.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.542 --rc genhtml_branch_coverage=1 00:23:01.542 --rc genhtml_function_coverage=1 00:23:01.542 --rc genhtml_legend=1 00:23:01.542 --rc geninfo_all_blocks=1 00:23:01.542 --rc geninfo_unexecuted_blocks=1 00:23:01.542 00:23:01.542 ' 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.542 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.543 14:09:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.689 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.689 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:09.689 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.689 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.690 14:10:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.690 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.690 14:10:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.690 
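[editor's note] Everything from "ip netns add cvl_0_0_ns_spdk" down to the iptables rule above is the target/initiator split the host tests rely on: the target-side port (cvl_0_0) is moved into a private namespace and given 10.0.0.2, the initiator port (cvl_0_1) keeps 10.0.0.1 in the default namespace, and TCP port 4420 is opened on the initiator interface; the two pings that follow confirm reachability in both directions. Condensed into a hand-runnable sketch (interface names are specific to this machine, taken from the "Found net devices" lines above):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT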
14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:23:09.690 00:23:09.690 --- 10.0.0.2 ping statistics --- 00:23:09.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.690 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:23:09.690 00:23:09.690 --- 10.0.0.1 ping statistics --- 00:23:09.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.690 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1106017 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1106017 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1106017 ']' 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.690 14:10:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.690 [2024-10-30 14:10:07.187069] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:23:09.690 [2024-10-30 14:10:07.187137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.690 [2024-10-30 14:10:07.285132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.690 [2024-10-30 14:10:07.339179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.690 [2024-10-30 14:10:07.339236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.690 [2024-10-30 14:10:07.339245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.690 [2024-10-30 14:10:07.339252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.690 [2024-10-30 14:10:07.339258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.690 [2024-10-30 14:10:07.341321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.690 [2024-10-30 14:10:07.341487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.690 [2024-10-30 14:10:07.341644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.690 [2024-10-30 14:10:07.341644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.953 [2024-10-30 14:10:08.069739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.953 Malloc0 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.953 [2024-10-30 14:10:08.152444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.953 [ 00:23:09.953 { 00:23:09.953 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:09.953 "subtype": "Discovery", 00:23:09.953 "listen_addresses": [], 00:23:09.953 "allow_any_host": true, 00:23:09.953 "hosts": [] 00:23:09.953 }, 00:23:09.953 { 00:23:09.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.953 "subtype": "NVMe", 00:23:09.953 "listen_addresses": [ 00:23:09.953 { 00:23:09.953 "trtype": "TCP", 00:23:09.953 "adrfam": "IPv4", 00:23:09.953 "traddr": "10.0.0.2", 00:23:09.953 "trsvcid": "4420" 00:23:09.953 } 00:23:09.953 ], 00:23:09.953 "allow_any_host": true, 00:23:09.953 "hosts": [], 00:23:09.953 "serial_number": "SPDK00000000000001", 00:23:09.953 "model_number": "SPDK bdev Controller", 00:23:09.953 "max_namespaces": 2, 00:23:09.953 "min_cntlid": 1, 00:23:09.953 "max_cntlid": 65519, 00:23:09.953 "namespaces": [ 00:23:09.953 { 00:23:09.953 "nsid": 1, 00:23:09.953 "bdev_name": "Malloc0", 00:23:09.953 "name": "Malloc0", 00:23:09.953 "nguid": "FDDDA3D7B8E34DE39FFC695FEBBC3AB6", 00:23:09.953 "uuid": "fddda3d7-b8e3-4de3-9ffc-695febbc3ab6" 00:23:09.953 } 00:23:09.953 ] 00:23:09.953 } 00:23:09.953 ] 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:09.953 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:09.954 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1106353 00:23:09.954 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:09.954 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:09.954 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:09.954 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:09.954 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:09.954 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:09.954 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:23:10.215 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.477 Malloc1 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:10.477 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.478 Asynchronous Event Request test 00:23:10.478 Attaching to 10.0.0.2 00:23:10.478 Attached to 10.0.0.2 00:23:10.478 Registering asynchronous event callbacks... 00:23:10.478 Starting namespace attribute notice tests for all controllers... 00:23:10.478 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:10.478 aer_cb - Changed Namespace 00:23:10.478 Cleaning up... 
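[editor's note] The "Changed Namespace" notice above is the point of this test: the aer example app connects to cnode1 (created with -m 2, i.e. room for a second namespace) and arms asynchronous event callbacks, and the hot-add of Malloc1 as nsid 2 is what fires the namespace-attribute AEN; the subsystem dump that follows accordingly lists both Malloc0 (nsid 1) and Malloc1 (nsid 2). A sketch of the minimal reproduction with the same binary and RPCs the test drives (the &/wait plumbing here is an assumption; the test itself synchronises on /tmp/aer_touch_file, which the app touches once it is ready, as logged above):

# listener: connects to cnode1 over TCP and waits for namespace-attribute events
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -n 2 -t /tmp/aer_touch_file &
# trigger: hot-add a second namespace to the subsystem
rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait    # the app reports the Changed Namespace event and exits, as in the log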
00:23:10.478 [ 00:23:10.478 { 00:23:10.478 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:10.478 "subtype": "Discovery", 00:23:10.478 "listen_addresses": [], 00:23:10.478 "allow_any_host": true, 00:23:10.478 "hosts": [] 00:23:10.478 }, 00:23:10.478 { 00:23:10.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.478 "subtype": "NVMe", 00:23:10.478 "listen_addresses": [ 00:23:10.478 { 00:23:10.478 "trtype": "TCP", 00:23:10.478 "adrfam": "IPv4", 00:23:10.478 "traddr": "10.0.0.2", 00:23:10.478 "trsvcid": "4420" 00:23:10.478 } 00:23:10.478 ], 00:23:10.478 "allow_any_host": true, 00:23:10.478 "hosts": [], 00:23:10.478 "serial_number": "SPDK00000000000001", 00:23:10.478 "model_number": "SPDK bdev Controller", 00:23:10.478 "max_namespaces": 2, 00:23:10.478 "min_cntlid": 1, 00:23:10.478 "max_cntlid": 65519, 00:23:10.478 "namespaces": [ 00:23:10.478 { 00:23:10.478 "nsid": 1, 00:23:10.478 "bdev_name": "Malloc0", 00:23:10.478 "name": "Malloc0", 00:23:10.478 "nguid": "FDDDA3D7B8E34DE39FFC695FEBBC3AB6", 00:23:10.478 "uuid": "fddda3d7-b8e3-4de3-9ffc-695febbc3ab6" 00:23:10.478 }, 00:23:10.478 { 00:23:10.478 "nsid": 2, 00:23:10.478 "bdev_name": "Malloc1", 00:23:10.478 "name": "Malloc1", 00:23:10.478 "nguid": "0AD474E8E394492E9022DDBEC67F03AC", 00:23:10.478 "uuid": "0ad474e8-e394-492e-9022-ddbec67f03ac" 00:23:10.478 } 00:23:10.478 ] 00:23:10.478 } 00:23:10.478 ] 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1106353 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.478 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.478 rmmod 
nvme_tcp 00:23:10.740 rmmod nvme_fabrics 00:23:10.740 rmmod nvme_keyring 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1106017 ']' 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1106017 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1106017 ']' 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1106017 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1106017 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:10.740 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1106017' 00:23:10.740 killing process with pid 1106017 00:23:10.741 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1106017 00:23:10.741 14:10:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1106017 00:23:11.004 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:11.004 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:11.004 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:11.004 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:11.004 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:11.004 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:11.005 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:11.005 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.005 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.005 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.005 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.005 14:10:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.924 14:10:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:12.924 00:23:12.924 real 0m11.767s 00:23:12.924 user 0m9.067s 00:23:12.924 sys 0m6.238s 00:23:12.924 14:10:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.924 14:10:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.924 ************************************ 00:23:12.924 END TEST nvmf_aer 00:23:12.924 ************************************ 00:23:12.924 14:10:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:12.924 14:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:12.924 14:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.924 14:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.186 ************************************ 00:23:13.186 START TEST nvmf_async_init 00:23:13.186 ************************************ 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:13.186 * Looking for test storage... 00:23:13.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:13.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.186 --rc genhtml_branch_coverage=1 00:23:13.186 --rc genhtml_function_coverage=1 00:23:13.186 --rc genhtml_legend=1 00:23:13.186 --rc geninfo_all_blocks=1 00:23:13.186 --rc geninfo_unexecuted_blocks=1 00:23:13.186 00:23:13.186 ' 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:13.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.186 --rc genhtml_branch_coverage=1 00:23:13.186 --rc genhtml_function_coverage=1 00:23:13.186 --rc genhtml_legend=1 00:23:13.186 --rc geninfo_all_blocks=1 00:23:13.186 --rc geninfo_unexecuted_blocks=1 00:23:13.186 00:23:13.186 ' 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:13.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.186 --rc genhtml_branch_coverage=1 00:23:13.186 --rc genhtml_function_coverage=1 00:23:13.186 --rc genhtml_legend=1 00:23:13.186 --rc geninfo_all_blocks=1 00:23:13.186 --rc geninfo_unexecuted_blocks=1 00:23:13.186 00:23:13.186 ' 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:13.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.186 --rc genhtml_branch_coverage=1 00:23:13.186 --rc genhtml_function_coverage=1 00:23:13.186 --rc genhtml_legend=1 00:23:13.186 --rc geninfo_all_blocks=1 00:23:13.186 --rc geninfo_unexecuted_blocks=1 00:23:13.186 00:23:13.186 ' 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.186 14:10:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.186 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:13.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:13.187 14:10:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f0393ceb960d4136b39536f9e3768e72 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.187 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.447 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:13.447 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:13.447 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:13.447 14:10:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:21.595 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:21.595 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:21.595 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:21.595 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.595 14:10:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:21.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:23:21.595 00:23:21.595 --- 10.0.0.2 ping statistics --- 00:23:21.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.595 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:21.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:23:21.595 00:23:21.595 --- 10.0.0.1 ping statistics --- 00:23:21.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.595 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:21.595 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:21.596 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.596 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:21.596 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:21.596 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.596 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:21.596 14:10:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1110651 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1110651 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1110651 ']' 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.596 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.596 [2024-10-30 14:10:19.084715] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:23:21.596 [2024-10-30 14:10:19.084790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.596 [2024-10-30 14:10:19.181733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.596 [2024-10-30 14:10:19.233641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.596 [2024-10-30 14:10:19.233692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.596 [2024-10-30 14:10:19.233701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.596 [2024-10-30 14:10:19.233709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.596 [2024-10-30 14:10:19.233715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.596 [2024-10-30 14:10:19.234485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 [2024-10-30 14:10:19.948315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 null0 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f0393ceb960d4136b39536f9e3768e72 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.856 14:10:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.856 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:21.856 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.856 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 [2024-10-30 14:10:20.008692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.856 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.856 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:21.856 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.856 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.117 nvme0n1 00:23:22.117 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.117 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:22.117 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.117 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.117 [ 00:23:22.117 { 00:23:22.117 "name": "nvme0n1", 00:23:22.117 "aliases": [ 00:23:22.117 "f0393ceb-960d-4136-b395-36f9e3768e72" 00:23:22.117 ], 00:23:22.117 "product_name": "NVMe disk", 00:23:22.117 "block_size": 512, 00:23:22.117 "num_blocks": 2097152, 00:23:22.117 "uuid": "f0393ceb-960d-4136-b395-36f9e3768e72", 00:23:22.117 "numa_id": 0, 00:23:22.117 "assigned_rate_limits": { 00:23:22.117 "rw_ios_per_sec": 0, 00:23:22.117 "rw_mbytes_per_sec": 0, 00:23:22.117 "r_mbytes_per_sec": 0, 00:23:22.117 "w_mbytes_per_sec": 0 00:23:22.117 }, 00:23:22.117 "claimed": false, 00:23:22.117 "zoned": false, 00:23:22.117 "supported_io_types": { 00:23:22.117 "read": true, 00:23:22.117 "write": true, 00:23:22.117 "unmap": false, 00:23:22.117 "flush": true, 00:23:22.117 "reset": true, 00:23:22.117 "nvme_admin": true, 00:23:22.117 "nvme_io": true, 00:23:22.117 "nvme_io_md": false, 00:23:22.117 "write_zeroes": true, 00:23:22.117 "zcopy": false, 00:23:22.117 "get_zone_info": false, 00:23:22.117 "zone_management": false, 00:23:22.117 "zone_append": false, 00:23:22.117 "compare": true, 00:23:22.117 "compare_and_write": true, 00:23:22.117 "abort": true, 00:23:22.117 "seek_hole": false, 00:23:22.117 "seek_data": false, 00:23:22.117 "copy": true, 00:23:22.117 "nvme_iov_md": false 00:23:22.117 }, 00:23:22.117 
"memory_domains": [ 00:23:22.117 { 00:23:22.117 "dma_device_id": "system", 00:23:22.117 "dma_device_type": 1 00:23:22.117 } 00:23:22.117 ], 00:23:22.117 "driver_specific": { 00:23:22.117 "nvme": [ 00:23:22.117 { 00:23:22.117 "trid": { 00:23:22.117 "trtype": "TCP", 00:23:22.117 "adrfam": "IPv4", 00:23:22.117 "traddr": "10.0.0.2", 00:23:22.117 "trsvcid": "4420", 00:23:22.117 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:22.117 }, 00:23:22.117 "ctrlr_data": { 00:23:22.117 "cntlid": 1, 00:23:22.117 "vendor_id": "0x8086", 00:23:22.117 "model_number": "SPDK bdev Controller", 00:23:22.117 "serial_number": "00000000000000000000", 00:23:22.117 "firmware_revision": "25.01", 00:23:22.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:22.117 "oacs": { 00:23:22.117 "security": 0, 00:23:22.117 "format": 0, 00:23:22.117 "firmware": 0, 00:23:22.117 "ns_manage": 0 00:23:22.117 }, 00:23:22.117 "multi_ctrlr": true, 00:23:22.117 "ana_reporting": false 00:23:22.117 }, 00:23:22.117 "vs": { 00:23:22.117 "nvme_version": "1.3" 00:23:22.117 }, 00:23:22.117 "ns_data": { 00:23:22.117 "id": 1, 00:23:22.117 "can_share": true 00:23:22.117 } 00:23:22.117 } 00:23:22.117 ], 00:23:22.117 "mp_policy": "active_passive" 00:23:22.117 } 00:23:22.117 } 00:23:22.117 ] 00:23:22.117 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.117 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:22.117 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.117 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.117 [2024-10-30 14:10:20.285177] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:22.117 [2024-10-30 14:10:20.285265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d3d40 (9): Bad file descriptor 00:23:22.379 [2024-10-30 14:10:20.416864] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.379 [ 00:23:22.379 { 00:23:22.379 "name": "nvme0n1", 00:23:22.379 "aliases": [ 00:23:22.379 "f0393ceb-960d-4136-b395-36f9e3768e72" 00:23:22.379 ], 00:23:22.379 "product_name": "NVMe disk", 00:23:22.379 "block_size": 512, 00:23:22.379 "num_blocks": 2097152, 00:23:22.379 "uuid": "f0393ceb-960d-4136-b395-36f9e3768e72", 00:23:22.379 "numa_id": 0, 00:23:22.379 "assigned_rate_limits": { 00:23:22.379 "rw_ios_per_sec": 0, 00:23:22.379 "rw_mbytes_per_sec": 0, 00:23:22.379 "r_mbytes_per_sec": 0, 00:23:22.379 "w_mbytes_per_sec": 0 00:23:22.379 }, 00:23:22.379 "claimed": false, 00:23:22.379 "zoned": false, 00:23:22.379 "supported_io_types": { 00:23:22.379 "read": true, 00:23:22.379 "write": true, 00:23:22.379 "unmap": false, 00:23:22.379 "flush": true, 00:23:22.379 "reset": true, 00:23:22.379 "nvme_admin": true, 00:23:22.379 "nvme_io": true, 00:23:22.379 "nvme_io_md": false, 00:23:22.379 "write_zeroes": true, 00:23:22.379 "zcopy": false, 00:23:22.379 "get_zone_info": false, 00:23:22.379 "zone_management": false, 00:23:22.379 "zone_append": false, 00:23:22.379 "compare": true, 00:23:22.379 "compare_and_write": true, 00:23:22.379 "abort": true, 00:23:22.379 "seek_hole": false, 00:23:22.379 "seek_data": false, 00:23:22.379 "copy": true, 00:23:22.379 "nvme_iov_md": false 00:23:22.379 }, 00:23:22.379 "memory_domains": [ 00:23:22.379 { 00:23:22.379 "dma_device_id": "system", 00:23:22.379 "dma_device_type": 1 00:23:22.379 } 00:23:22.379 ], 00:23:22.379 "driver_specific": { 00:23:22.379 "nvme": [ 00:23:22.379 { 00:23:22.379 "trid": { 00:23:22.379 "trtype": "TCP", 00:23:22.379 "adrfam": "IPv4", 00:23:22.379 "traddr": "10.0.0.2", 00:23:22.379 "trsvcid": "4420", 00:23:22.379 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:22.379 }, 00:23:22.379 "ctrlr_data": { 00:23:22.379 "cntlid": 2, 00:23:22.379 "vendor_id": "0x8086", 00:23:22.379 "model_number": "SPDK bdev Controller", 00:23:22.379 "serial_number": "00000000000000000000", 00:23:22.379 "firmware_revision": "25.01", 00:23:22.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:22.379 "oacs": { 00:23:22.379 "security": 0, 00:23:22.379 "format": 0, 00:23:22.379 "firmware": 0, 00:23:22.379 "ns_manage": 0 00:23:22.379 }, 00:23:22.379 "multi_ctrlr": true, 00:23:22.379 "ana_reporting": false 00:23:22.379 }, 00:23:22.379 "vs": { 00:23:22.379 "nvme_version": "1.3" 00:23:22.379 }, 00:23:22.379 "ns_data": { 00:23:22.379 "id": 1, 00:23:22.379 "can_share": true 00:23:22.379 } 00:23:22.379 } 00:23:22.379 ], 00:23:22.379 "mp_policy": "active_passive" 00:23:22.379 } 00:23:22.379 } 00:23:22.379 ] 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
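[Editor's sketch] With the target in place, the host-side portion of the test (async_init.sh lines 37-50 in the trace) attaches a controller over TCP, inspects the resulting nvme0n1 bdev, resets the controller, inspects it again, and detaches. A sketch of that flow under the same assumptions as above; the cntlid values in the comments are the ones visible in the JSON dumps:

  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0                # exposes bdev nvme0n1
  scripts/rpc.py bdev_get_bdevs -b nvme0n1         # JSON above: cntlid 1, 2097152 x 512 B blocks
  scripts/rpc.py bdev_nvme_reset_controller nvme0  # disconnect/reconnect; cntlid becomes 2
  scripts/rpc.py bdev_get_bdevs -b nvme0n1         # bdev name and uuid survive the reset
  scripts/rpc.py bdev_nvme_detach_controller nvme0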
00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.RJladjWEuV 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.RJladjWEuV 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.RJladjWEuV 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.379 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 [2024-10-30 14:10:20.505861] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.380 [2024-10-30 14:10:20.506014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 [2024-10-30 14:10:20.529940] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.380 nvme0n1 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 [ 00:23:22.380 { 00:23:22.380 "name": "nvme0n1", 00:23:22.380 "aliases": [ 00:23:22.380 "f0393ceb-960d-4136-b395-36f9e3768e72" 00:23:22.380 ], 00:23:22.380 "product_name": "NVMe disk", 00:23:22.380 "block_size": 512, 00:23:22.380 "num_blocks": 2097152, 00:23:22.380 "uuid": "f0393ceb-960d-4136-b395-36f9e3768e72", 00:23:22.380 "numa_id": 0, 00:23:22.380 "assigned_rate_limits": { 00:23:22.380 "rw_ios_per_sec": 0, 00:23:22.380 "rw_mbytes_per_sec": 0, 00:23:22.380 "r_mbytes_per_sec": 0, 00:23:22.380 "w_mbytes_per_sec": 0 00:23:22.380 }, 00:23:22.380 "claimed": false, 00:23:22.380 "zoned": false, 00:23:22.380 "supported_io_types": { 00:23:22.380 "read": true, 00:23:22.380 "write": true, 00:23:22.380 "unmap": false, 00:23:22.380 "flush": true, 00:23:22.380 "reset": true, 00:23:22.380 "nvme_admin": true, 00:23:22.380 "nvme_io": true, 00:23:22.380 "nvme_io_md": false, 00:23:22.380 "write_zeroes": true, 00:23:22.380 "zcopy": false, 00:23:22.380 "get_zone_info": false, 00:23:22.380 "zone_management": false, 00:23:22.380 "zone_append": false, 00:23:22.380 "compare": true, 00:23:22.380 "compare_and_write": true, 00:23:22.380 "abort": true, 00:23:22.380 "seek_hole": false, 00:23:22.380 "seek_data": false, 00:23:22.380 "copy": true, 00:23:22.380 "nvme_iov_md": false 00:23:22.380 }, 00:23:22.380 "memory_domains": [ 00:23:22.380 { 00:23:22.380 "dma_device_id": "system", 00:23:22.380 "dma_device_type": 1 00:23:22.380 } 00:23:22.380 ], 00:23:22.380 "driver_specific": { 00:23:22.380 "nvme": [ 00:23:22.380 { 00:23:22.380 "trid": { 00:23:22.380 "trtype": "TCP", 00:23:22.380 "adrfam": "IPv4", 00:23:22.380 "traddr": "10.0.0.2", 00:23:22.380 "trsvcid": "4421", 00:23:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:22.380 }, 00:23:22.380 "ctrlr_data": { 00:23:22.380 "cntlid": 3, 00:23:22.380 "vendor_id": "0x8086", 00:23:22.380 "model_number": "SPDK bdev Controller", 00:23:22.380 "serial_number": "00000000000000000000", 00:23:22.380 "firmware_revision": "25.01", 00:23:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:22.380 "oacs": { 00:23:22.380 "security": 0, 00:23:22.380 "format": 0, 00:23:22.380 "firmware": 0, 00:23:22.380 "ns_manage": 0 00:23:22.380 }, 00:23:22.380 "multi_ctrlr": true, 00:23:22.380 "ana_reporting": false 00:23:22.380 }, 00:23:22.380 "vs": { 00:23:22.380 "nvme_version": "1.3" 00:23:22.380 }, 00:23:22.380 "ns_data": { 00:23:22.380 "id": 1, 00:23:22.380 "can_share": true 00:23:22.380 } 00:23:22.380 } 00:23:22.380 ], 00:23:22.380 "mp_policy": "active_passive" 00:23:22.380 } 00:23:22.380 } 00:23:22.380 ] 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.RJladjWEuV 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.380 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.380 rmmod nvme_tcp 00:23:22.380 rmmod nvme_fabrics 00:23:22.641 rmmod nvme_keyring 00:23:22.641 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.641 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:22.641 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:22.641 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1110651 ']' 00:23:22.641 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1110651 00:23:22.641 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1110651 ']' 00:23:22.641 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1110651 00:23:22.642 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:22.642 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.642 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1110651 00:23:22.642 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.642 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.642 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1110651' 00:23:22.642 killing process with pid 1110651 00:23:22.642 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1110651 00:23:22.642 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1110651 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.903 14:10:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.817 14:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.817 00:23:24.817 real 0m11.805s 00:23:24.817 user 0m4.279s 00:23:24.817 sys 0m6.085s 00:23:24.817 14:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.817 14:10:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.817 ************************************ 00:23:24.817 END TEST nvmf_async_init 00:23:24.817 ************************************ 00:23:24.817 14:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:24.817 14:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.817 14:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.817 14:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.078 ************************************ 00:23:25.078 START TEST dma 00:23:25.078 ************************************ 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:25.078 * Looking for test storage... 00:23:25.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:25.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.078 --rc genhtml_branch_coverage=1 00:23:25.078 --rc genhtml_function_coverage=1 00:23:25.078 --rc genhtml_legend=1 00:23:25.078 --rc geninfo_all_blocks=1 00:23:25.078 --rc geninfo_unexecuted_blocks=1 00:23:25.078 00:23:25.078 ' 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:25.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.078 --rc genhtml_branch_coverage=1 00:23:25.078 --rc genhtml_function_coverage=1 00:23:25.078 --rc genhtml_legend=1 00:23:25.078 --rc geninfo_all_blocks=1 00:23:25.078 --rc geninfo_unexecuted_blocks=1 00:23:25.078 00:23:25.078 ' 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:25.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.078 --rc genhtml_branch_coverage=1 00:23:25.078 --rc genhtml_function_coverage=1 00:23:25.078 --rc genhtml_legend=1 00:23:25.078 --rc geninfo_all_blocks=1 00:23:25.078 --rc geninfo_unexecuted_blocks=1 00:23:25.078 00:23:25.078 ' 00:23:25.078 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:25.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.079 --rc genhtml_branch_coverage=1 00:23:25.079 --rc genhtml_function_coverage=1 00:23:25.079 --rc genhtml_legend=1 00:23:25.079 --rc geninfo_all_blocks=1 00:23:25.079 --rc geninfo_unexecuted_blocks=1 00:23:25.079 00:23:25.079 ' 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.079 
14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:25.079 00:23:25.079 real 0m0.234s 00:23:25.079 user 0m0.141s 00:23:25.079 sys 0m0.109s 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.079 14:10:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:25.079 ************************************ 00:23:25.079 END TEST dma 00:23:25.079 ************************************ 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.342 ************************************ 00:23:25.342 START TEST nvmf_identify 00:23:25.342 
************************************ 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:25.342 * Looking for test storage... 00:23:25.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.342 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:25.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.343 --rc genhtml_branch_coverage=1 00:23:25.343 --rc genhtml_function_coverage=1 00:23:25.343 --rc genhtml_legend=1 00:23:25.343 --rc geninfo_all_blocks=1 00:23:25.343 --rc geninfo_unexecuted_blocks=1 00:23:25.343 00:23:25.343 ' 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:25.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.343 --rc genhtml_branch_coverage=1 00:23:25.343 --rc genhtml_function_coverage=1 00:23:25.343 --rc genhtml_legend=1 00:23:25.343 --rc geninfo_all_blocks=1 00:23:25.343 --rc geninfo_unexecuted_blocks=1 00:23:25.343 00:23:25.343 ' 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:25.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.343 --rc genhtml_branch_coverage=1 00:23:25.343 --rc genhtml_function_coverage=1 00:23:25.343 --rc genhtml_legend=1 00:23:25.343 --rc geninfo_all_blocks=1 00:23:25.343 --rc geninfo_unexecuted_blocks=1 00:23:25.343 00:23:25.343 ' 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:25.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.343 --rc genhtml_branch_coverage=1 00:23:25.343 --rc genhtml_function_coverage=1 00:23:25.343 --rc genhtml_legend=1 00:23:25.343 --rc geninfo_all_blocks=1 00:23:25.343 --rc geninfo_unexecuted_blocks=1 00:23:25.343 00:23:25.343 ' 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.343 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:25.605 14:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:33.746 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:33.746 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:33.746 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:33.746 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.746 14:10:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.746 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.746 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.746 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.746 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:23:33.746 00:23:33.746 --- 10.0.0.2 ping statistics --- 00:23:33.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.746 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:23:33.746 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:23:33.747 00:23:33.747 --- 10.0.0.1 ping statistics --- 00:23:33.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.747 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1115140 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1115140 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1115140 ']' 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.747 14:10:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:33.747 [2024-10-30 14:10:31.220511] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
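The network bring-up traced above condenses to the short sequence below. This is a sketch reconstructed from the xtrace, not a verbatim excerpt: the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this E810 test bed, paths are shortened, and the harness additionally flushes stale addresses and tags the iptables rule with an SPDK_NVMF comment. The idea is that the target-facing port is isolated in its own network namespace so nvmf_tgt and the initiator exchange NVMe/TCP traffic over a real link.

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the namespace

With both pings answering (0.677 ms and 0.324 ms in the statistics above), the target application is started inside the namespace and the test proceeds once it is listening on the RPC socket.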
00:23:33.747 [2024-10-30 14:10:31.220582] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.747 [2024-10-30 14:10:31.325095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.747 [2024-10-30 14:10:31.380623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.747 [2024-10-30 14:10:31.380681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.747 [2024-10-30 14:10:31.380690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.747 [2024-10-30 14:10:31.380697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.747 [2024-10-30 14:10:31.380703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.747 [2024-10-30 14:10:31.383137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.747 [2024-10-30 14:10:31.383310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.747 [2024-10-30 14:10:31.383466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.747 [2024-10-30 14:10:31.383467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.008 [2024-10-30 14:10:32.051901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.008 Malloc0 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.008 [2024-10-30 14:10:32.174825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.008 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.008 [ 00:23:34.008 { 00:23:34.008 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:34.008 "subtype": "Discovery", 00:23:34.008 "listen_addresses": [ 00:23:34.008 { 00:23:34.008 "trtype": "TCP", 00:23:34.008 "adrfam": "IPv4", 00:23:34.008 "traddr": "10.0.0.2", 00:23:34.008 "trsvcid": "4420" 00:23:34.008 } 00:23:34.008 ], 00:23:34.008 "allow_any_host": true, 00:23:34.008 "hosts": [] 00:23:34.008 }, 00:23:34.009 { 00:23:34.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.009 "subtype": "NVMe", 00:23:34.009 "listen_addresses": [ 00:23:34.009 { 00:23:34.009 "trtype": "TCP", 00:23:34.009 "adrfam": "IPv4", 00:23:34.009 "traddr": "10.0.0.2", 00:23:34.009 "trsvcid": "4420" 00:23:34.009 } 00:23:34.009 ], 00:23:34.009 "allow_any_host": true, 00:23:34.009 "hosts": [], 00:23:34.009 "serial_number": "SPDK00000000000001", 00:23:34.009 "model_number": "SPDK bdev Controller", 00:23:34.009 "max_namespaces": 32, 00:23:34.009 "min_cntlid": 1, 00:23:34.009 "max_cntlid": 65519, 00:23:34.009 "namespaces": [ 00:23:34.009 { 00:23:34.009 "nsid": 1, 00:23:34.009 "bdev_name": "Malloc0", 00:23:34.009 "name": "Malloc0", 00:23:34.009 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:34.009 "eui64": "ABCDEF0123456789", 00:23:34.009 "uuid": "33673ee0-aa0a-46a2-867c-3a10f18c2e24" 00:23:34.009 } 00:23:34.009 ] 00:23:34.009 } 00:23:34.009 ] 00:23:34.009 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.009 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:34.009 [2024-10-30 14:10:32.239104] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:23:34.009 [2024-10-30 14:10:32.239151] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115456 ] 00:23:34.009 [2024-10-30 14:10:32.293259] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:34.009 [2024-10-30 14:10:32.293337] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:34.009 [2024-10-30 14:10:32.293343] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:34.009 [2024-10-30 14:10:32.293360] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:34.009 [2024-10-30 14:10:32.293371] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:34.009 [2024-10-30 14:10:32.297177] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:34.009 [2024-10-30 14:10:32.297229] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd6f7e0 0 00:23:34.009 [2024-10-30 14:10:32.304764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:34.009 [2024-10-30 14:10:32.304784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:34.009 [2024-10-30 14:10:32.304790] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:34.009 [2024-10-30 14:10:32.304794] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:34.009 [2024-10-30 14:10:32.304841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.009 [2024-10-30 14:10:32.304848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.009 [2024-10-30 14:10:32.304852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.009 [2024-10-30 14:10:32.304871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:34.009 [2024-10-30 14:10:32.304897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 00:23:34.272 [2024-10-30 14:10:32.312762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.272 [2024-10-30 14:10:32.312774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.272 [2024-10-30 14:10:32.312778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.272 [2024-10-30 14:10:32.312783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.272 [2024-10-30 14:10:32.312798] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:34.272 [2024-10-30 14:10:32.312808] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:34.272 [2024-10-30 14:10:32.312813] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:34.272 [2024-10-30 14:10:32.312833] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.272 [2024-10-30 14:10:32.312838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.272 [2024-10-30 14:10:32.312841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.272 [2024-10-30 14:10:32.312851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.272 [2024-10-30 14:10:32.312867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 00:23:34.272 [2024-10-30 14:10:32.313109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.272 [2024-10-30 14:10:32.313116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.272 [2024-10-30 14:10:32.313119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.272 [2024-10-30 14:10:32.313129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.272 [2024-10-30 14:10:32.313135] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:34.272 [2024-10-30 14:10:32.313143] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:34.272 [2024-10-30 14:10:32.313150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.313154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.313158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 14:10:32.313165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.273 [2024-10-30 14:10:32.313176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 00:23:34.273 [2024-10-30 14:10:32.313390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.273 [2024-10-30 14:10:32.313396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.273 [2024-10-30 14:10:32.313400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.313403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.273 [2024-10-30 14:10:32.313409] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:34.273 [2024-10-30 14:10:32.313418] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:34.273 [2024-10-30 14:10:32.313425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.313429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.313432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 14:10:32.313439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.273 [2024-10-30 14:10:32.313449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 
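For context on the identify pass whose controller-initialization debug output surrounds this point, the target provisioning that host/identify.sh performed just beforehand (the rpc_cmd calls traced above) reduces to the RPC sequence below. rpc_cmd in this harness is assumed to forward its arguments to the target's JSON-RPC interface, so a roughly equivalent direct invocation through the stock scripts/rpc.py would look as follows; paths are shortened relative to the workspace and the flags are copied from the trace rather than annotated.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, arguments as traced
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                       # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Host side: query the discovery subsystem; this drives the FABRIC CONNECT,
    # PROPERTY GET/SET and IDENTIFY exchange visible in the nvme_tcp debug log here.
    build/bin/spdk_nvme_identify \
        -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The nvmf_get_subsystems JSON dump earlier in the trace confirms the result of this provisioning: the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 with the Malloc0 namespace, both listening on 10.0.0.2:4420.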
00:23:34.273 [2024-10-30 14:10:32.313647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.273 [2024-10-30 14:10:32.313653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.273 [2024-10-30 14:10:32.313657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.313661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.273 [2024-10-30 14:10:32.313666] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:34.273 [2024-10-30 14:10:32.313679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.313683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.313687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 14:10:32.313694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.273 [2024-10-30 14:10:32.313704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 00:23:34.273 [2024-10-30 14:10:32.313915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.273 [2024-10-30 14:10:32.313921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.273 [2024-10-30 14:10:32.313925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.313929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.273 [2024-10-30 14:10:32.313934] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:34.273 [2024-10-30 14:10:32.313942] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:34.273 [2024-10-30 14:10:32.313950] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:34.273 [2024-10-30 14:10:32.314056] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:34.273 [2024-10-30 14:10:32.314061] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:34.273 [2024-10-30 14:10:32.314072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 14:10:32.314086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.273 [2024-10-30 14:10:32.314097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 00:23:34.273 [2024-10-30 14:10:32.314310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.273 [2024-10-30 14:10:32.314316] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.273 [2024-10-30 14:10:32.314320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.273 [2024-10-30 14:10:32.314329] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:34.273 [2024-10-30 14:10:32.314338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 14:10:32.314353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.273 [2024-10-30 14:10:32.314363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 00:23:34.273 [2024-10-30 14:10:32.314530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.273 [2024-10-30 14:10:32.314536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.273 [2024-10-30 14:10:32.314540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.273 [2024-10-30 14:10:32.314549] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:34.273 [2024-10-30 14:10:32.314554] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:34.273 [2024-10-30 14:10:32.314562] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:34.273 [2024-10-30 14:10:32.314571] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:34.273 [2024-10-30 14:10:32.314581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 14:10:32.314592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.273 [2024-10-30 14:10:32.314602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 00:23:34.273 [2024-10-30 14:10:32.314836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.273 [2024-10-30 14:10:32.314844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.273 [2024-10-30 14:10:32.314848] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314852] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6f7e0): datao=0, datal=4096, cccid=0 00:23:34.273 [2024-10-30 14:10:32.314858] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xdd1240) on tqpair(0xd6f7e0): expected_datao=0, payload_size=4096 00:23:34.273 [2024-10-30 14:10:32.314863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314871] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.314876] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.359761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.273 [2024-10-30 14:10:32.359774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.273 [2024-10-30 14:10:32.359778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.359782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.273 [2024-10-30 14:10:32.359792] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:34.273 [2024-10-30 14:10:32.359797] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:34.273 [2024-10-30 14:10:32.359802] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:34.273 [2024-10-30 14:10:32.359808] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:34.273 [2024-10-30 14:10:32.359813] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:34.273 [2024-10-30 14:10:32.359818] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:34.273 [2024-10-30 14:10:32.359828] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:34.273 [2024-10-30 14:10:32.359837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.359841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.359844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 14:10:32.359853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:34.273 [2024-10-30 14:10:32.359867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 00:23:34.273 [2024-10-30 14:10:32.360050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.273 [2024-10-30 14:10:32.360056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.273 [2024-10-30 14:10:32.360060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.360064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.273 [2024-10-30 14:10:32.360082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.360086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.360089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 
14:10:32.360096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.273 [2024-10-30 14:10:32.360102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.360106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.360113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 14:10:32.360120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.273 [2024-10-30 14:10:32.360126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.360130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.360133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd6f7e0) 00:23:34.273 [2024-10-30 14:10:32.360139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.273 [2024-10-30 14:10:32.360147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.360151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.273 [2024-10-30 14:10:32.360154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6f7e0) 00:23:34.274 [2024-10-30 14:10:32.360160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.274 [2024-10-30 14:10:32.360165] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:34.274 [2024-10-30 14:10:32.360175] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:34.274 [2024-10-30 14:10:32.360181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.360185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6f7e0) 00:23:34.274 [2024-10-30 14:10:32.360192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.274 [2024-10-30 14:10:32.360205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1240, cid 0, qid 0 00:23:34.274 [2024-10-30 14:10:32.360210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd13c0, cid 1, qid 0 00:23:34.274 [2024-10-30 14:10:32.360215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1540, cid 2, qid 0 00:23:34.274 [2024-10-30 14:10:32.360220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd16c0, cid 3, qid 0 00:23:34.274 [2024-10-30 14:10:32.360225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1840, cid 4, qid 0 00:23:34.274 [2024-10-30 14:10:32.360465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.274 [2024-10-30 14:10:32.360471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.274 [2024-10-30 14:10:32.360475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.274 
[2024-10-30 14:10:32.360479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1840) on tqpair=0xd6f7e0 00:23:34.274 [2024-10-30 14:10:32.360488] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:34.274 [2024-10-30 14:10:32.360494] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:34.274 [2024-10-30 14:10:32.360506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.360510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6f7e0) 00:23:34.274 [2024-10-30 14:10:32.360516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.274 [2024-10-30 14:10:32.360527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1840, cid 4, qid 0 00:23:34.274 [2024-10-30 14:10:32.360766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.274 [2024-10-30 14:10:32.360773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.274 [2024-10-30 14:10:32.360780] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.360784] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6f7e0): datao=0, datal=4096, cccid=4 00:23:34.274 [2024-10-30 14:10:32.360789] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd1840) on tqpair(0xd6f7e0): expected_datao=0, payload_size=4096 00:23:34.274 [2024-10-30 14:10:32.360793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.360800] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.360804] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.360953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.274 [2024-10-30 14:10:32.360959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.274 [2024-10-30 14:10:32.360963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.360967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1840) on tqpair=0xd6f7e0 00:23:34.274 [2024-10-30 14:10:32.360982] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:34.274 [2024-10-30 14:10:32.361015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.361019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6f7e0) 00:23:34.274 [2024-10-30 14:10:32.361026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.274 [2024-10-30 14:10:32.361033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.361037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.361041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd6f7e0) 00:23:34.274 [2024-10-30 14:10:32.361047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.274 [2024-10-30 14:10:32.361062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1840, cid 4, qid 0 00:23:34.274 [2024-10-30 14:10:32.361068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd19c0, cid 5, qid 0 00:23:34.274 [2024-10-30 14:10:32.361310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.274 [2024-10-30 14:10:32.361316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.274 [2024-10-30 14:10:32.361319] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.361323] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6f7e0): datao=0, datal=1024, cccid=4 00:23:34.274 [2024-10-30 14:10:32.361328] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd1840) on tqpair(0xd6f7e0): expected_datao=0, payload_size=1024 00:23:34.274 [2024-10-30 14:10:32.361332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.361339] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.361343] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.361349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.274 [2024-10-30 14:10:32.361355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.274 [2024-10-30 14:10:32.361358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.361362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd19c0) on tqpair=0xd6f7e0 00:23:34.274 [2024-10-30 14:10:32.401922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.274 [2024-10-30 14:10:32.401934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.274 [2024-10-30 14:10:32.401938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.401943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1840) on tqpair=0xd6f7e0 00:23:34.274 [2024-10-30 14:10:32.401958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.401966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6f7e0) 00:23:34.274 [2024-10-30 14:10:32.401974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.274 [2024-10-30 14:10:32.401992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1840, cid 4, qid 0 00:23:34.274 [2024-10-30 14:10:32.402225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.274 [2024-10-30 14:10:32.402232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.274 [2024-10-30 14:10:32.402235] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.402239] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6f7e0): datao=0, datal=3072, cccid=4 00:23:34.274 [2024-10-30 14:10:32.402244] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd1840) on tqpair(0xd6f7e0): expected_datao=0, payload_size=3072 00:23:34.274 [2024-10-30 14:10:32.402248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
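00:23:34.274 The exchanges above show the discovery controller's admin queue being brought up over TCP and its discovery log page being fetched; the formatted identify output appears further below. For orientation, here is a minimal sketch of driving the same connect-and-identify flow through SPDK's public C API against the target traced here (10.0.0.2:4420, discovery NQN). The program name, printed fields and error handling are illustrative assumptions and are not taken from the test scripts.

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment; the app name here is arbitrary. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Transport parameters matching the connection traced in this log. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Connecting walks the same admin-queue init sequence seen in the
	 * trace: FABRIC CONNECT, CC.EN/CSTS.RDY handshake, IDENTIFY, AER setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	/* A few of the identify-controller fields that the report below formats. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID: 0x%04x, MDTS (raw exponent): %u, Subsystem NQN: %s\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts,
	       (const char *)cdata->subnqn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

A helper like this would be compiled against the SPDK headers and libraries built earlier in this job, much like the bundled build/bin/spdk_nvme_identify tool used later in this log.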
00:23:34.274 [2024-10-30 14:10:32.402265] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.402270] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.442945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.274 [2024-10-30 14:10:32.442955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.274 [2024-10-30 14:10:32.442959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.442963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1840) on tqpair=0xd6f7e0 00:23:34.274 [2024-10-30 14:10:32.442975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.442979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6f7e0) 00:23:34.274 [2024-10-30 14:10:32.442987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.274 [2024-10-30 14:10:32.443003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd1840, cid 4, qid 0 00:23:34.274 [2024-10-30 14:10:32.443231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.274 [2024-10-30 14:10:32.443238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.274 [2024-10-30 14:10:32.443241] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.443245] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6f7e0): datao=0, datal=8, cccid=4 00:23:34.274 [2024-10-30 14:10:32.443250] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd1840) on tqpair(0xd6f7e0): expected_datao=0, payload_size=8 00:23:34.274 [2024-10-30 14:10:32.443254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.443261] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.443265] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.483922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.274 [2024-10-30 14:10:32.483931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.274 [2024-10-30 14:10:32.483935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.274 [2024-10-30 14:10:32.483939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1840) on tqpair=0xd6f7e0 00:23:34.274 ===================================================== 00:23:34.274 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:34.274 ===================================================== 00:23:34.274 Controller Capabilities/Features 00:23:34.274 ================================ 00:23:34.274 Vendor ID: 0000 00:23:34.274 Subsystem Vendor ID: 0000 00:23:34.274 Serial Number: .................... 00:23:34.274 Model Number: ........................................ 
00:23:34.274 Firmware Version: 25.01
00:23:34.274 Recommended Arb Burst: 0
00:23:34.274 IEEE OUI Identifier: 00 00 00
00:23:34.274 Multi-path I/O
00:23:34.274 May have multiple subsystem ports: No
00:23:34.274 May have multiple controllers: No
00:23:34.274 Associated with SR-IOV VF: No
00:23:34.274 Max Data Transfer Size: 131072
00:23:34.274 Max Number of Namespaces: 0
00:23:34.274 Max Number of I/O Queues: 1024
00:23:34.274 NVMe Specification Version (VS): 1.3
00:23:34.274 NVMe Specification Version (Identify): 1.3
00:23:34.274 Maximum Queue Entries: 128
00:23:34.274 Contiguous Queues Required: Yes
00:23:34.274 Arbitration Mechanisms Supported
00:23:34.274 Weighted Round Robin: Not Supported
00:23:34.274 Vendor Specific: Not Supported
00:23:34.274 Reset Timeout: 15000 ms
00:23:34.274 Doorbell Stride: 4 bytes
00:23:34.274 NVM Subsystem Reset: Not Supported
00:23:34.274 Command Sets Supported
00:23:34.274 NVM Command Set: Supported
00:23:34.274 Boot Partition: Not Supported
00:23:34.274 Memory Page Size Minimum: 4096 bytes
00:23:34.274 Memory Page Size Maximum: 4096 bytes
00:23:34.275 Persistent Memory Region: Not Supported
00:23:34.275 Optional Asynchronous Events Supported
00:23:34.275 Namespace Attribute Notices: Not Supported
00:23:34.275 Firmware Activation Notices: Not Supported
00:23:34.275 ANA Change Notices: Not Supported
00:23:34.275 PLE Aggregate Log Change Notices: Not Supported
00:23:34.275 LBA Status Info Alert Notices: Not Supported
00:23:34.275 EGE Aggregate Log Change Notices: Not Supported
00:23:34.275 Normal NVM Subsystem Shutdown event: Not Supported
00:23:34.275 Zone Descriptor Change Notices: Not Supported
00:23:34.275 Discovery Log Change Notices: Supported
00:23:34.275 Controller Attributes
00:23:34.275 128-bit Host Identifier: Not Supported
00:23:34.275 Non-Operational Permissive Mode: Not Supported
00:23:34.275 NVM Sets: Not Supported
00:23:34.275 Read Recovery Levels: Not Supported
00:23:34.275 Endurance Groups: Not Supported
00:23:34.275 Predictable Latency Mode: Not Supported
00:23:34.275 Traffic Based Keep ALive: Not Supported
00:23:34.275 Namespace Granularity: Not Supported
00:23:34.275 SQ Associations: Not Supported
00:23:34.275 UUID List: Not Supported
00:23:34.275 Multi-Domain Subsystem: Not Supported
00:23:34.275 Fixed Capacity Management: Not Supported
00:23:34.275 Variable Capacity Management: Not Supported
00:23:34.275 Delete Endurance Group: Not Supported
00:23:34.275 Delete NVM Set: Not Supported
00:23:34.275 Extended LBA Formats Supported: Not Supported
00:23:34.275 Flexible Data Placement Supported: Not Supported
00:23:34.275 
00:23:34.275 Controller Memory Buffer Support
00:23:34.275 ================================
00:23:34.275 Supported: No
00:23:34.275 
00:23:34.275 Persistent Memory Region Support
00:23:34.275 ================================
00:23:34.275 Supported: No
00:23:34.275 
00:23:34.275 Admin Command Set Attributes
00:23:34.275 ============================
00:23:34.275 Security Send/Receive: Not Supported
00:23:34.275 Format NVM: Not Supported
00:23:34.275 Firmware Activate/Download: Not Supported
00:23:34.275 Namespace Management: Not Supported
00:23:34.275 Device Self-Test: Not Supported
00:23:34.275 Directives: Not Supported
00:23:34.275 NVMe-MI: Not Supported
00:23:34.275 Virtualization Management: Not Supported
00:23:34.275 Doorbell Buffer Config: Not Supported
00:23:34.275 Get LBA Status Capability: Not Supported
00:23:34.275 Command & Feature Lockdown Capability: Not Supported
00:23:34.275 Abort Command Limit: 1
00:23:34.275 Async Event Request Limit: 4
00:23:34.275 Number of Firmware Slots: N/A
00:23:34.275 Firmware Slot 1 Read-Only: N/A
00:23:34.275 Firmware Activation Without Reset: N/A
00:23:34.275 Multiple Update Detection Support: N/A
00:23:34.275 Firmware Update Granularity: No Information Provided
00:23:34.275 Per-Namespace SMART Log: No
00:23:34.275 Asymmetric Namespace Access Log Page: Not Supported
00:23:34.275 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:34.275 Command Effects Log Page: Not Supported
00:23:34.275 Get Log Page Extended Data: Supported
00:23:34.275 Telemetry Log Pages: Not Supported
00:23:34.275 Persistent Event Log Pages: Not Supported
00:23:34.275 Supported Log Pages Log Page: May Support
00:23:34.275 Commands Supported & Effects Log Page: Not Supported
00:23:34.275 Feature Identifiers & Effects Log Page:May Support
00:23:34.275 NVMe-MI Commands & Effects Log Page: May Support
00:23:34.275 Data Area 4 for Telemetry Log: Not Supported
00:23:34.275 Error Log Page Entries Supported: 128
00:23:34.275 Keep Alive: Not Supported
00:23:34.275 
00:23:34.275 NVM Command Set Attributes
00:23:34.275 ==========================
00:23:34.275 Submission Queue Entry Size
00:23:34.275 Max: 1
00:23:34.275 Min: 1
00:23:34.275 Completion Queue Entry Size
00:23:34.275 Max: 1
00:23:34.275 Min: 1
00:23:34.275 Number of Namespaces: 0
00:23:34.275 Compare Command: Not Supported
00:23:34.275 Write Uncorrectable Command: Not Supported
00:23:34.275 Dataset Management Command: Not Supported
00:23:34.275 Write Zeroes Command: Not Supported
00:23:34.275 Set Features Save Field: Not Supported
00:23:34.275 Reservations: Not Supported
00:23:34.275 Timestamp: Not Supported
00:23:34.275 Copy: Not Supported
00:23:34.275 Volatile Write Cache: Not Present
00:23:34.275 Atomic Write Unit (Normal): 1
00:23:34.275 Atomic Write Unit (PFail): 1
00:23:34.275 Atomic Compare & Write Unit: 1
00:23:34.275 Fused Compare & Write: Supported
00:23:34.275 Scatter-Gather List
00:23:34.275 SGL Command Set: Supported
00:23:34.275 SGL Keyed: Supported
00:23:34.275 SGL Bit Bucket Descriptor: Not Supported
00:23:34.275 SGL Metadata Pointer: Not Supported
00:23:34.275 Oversized SGL: Not Supported
00:23:34.275 SGL Metadata Address: Not Supported
00:23:34.275 SGL Offset: Supported
00:23:34.275 Transport SGL Data Block: Not Supported
00:23:34.275 Replay Protected Memory Block: Not Supported
00:23:34.275 
00:23:34.275 Firmware Slot Information
00:23:34.275 =========================
00:23:34.275 Active slot: 0
00:23:34.275 
00:23:34.275 
00:23:34.275 Error Log
00:23:34.275 =========
00:23:34.275 
00:23:34.275 Active Namespaces
00:23:34.275 =================
00:23:34.275 Discovery Log Page
00:23:34.275 ==================
00:23:34.275 Generation Counter: 2
00:23:34.275 Number of Records: 2
00:23:34.275 Record Format: 0
00:23:34.275 
00:23:34.275 Discovery Log Entry 0
00:23:34.275 ----------------------
00:23:34.275 Transport Type: 3 (TCP)
00:23:34.275 Address Family: 1 (IPv4)
00:23:34.275 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:34.275 Entry Flags:
00:23:34.275 Duplicate Returned Information: 1
00:23:34.275 Explicit Persistent Connection Support for Discovery: 1
00:23:34.275 Transport Requirements:
00:23:34.275 Secure Channel: Not Required
00:23:34.275 Port ID: 0 (0x0000)
00:23:34.275 Controller ID: 65535 (0xffff)
00:23:34.275 Admin Max SQ Size: 128
00:23:34.275 Transport Service Identifier: 4420
00:23:34.275 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:34.275 Transport Address: 10.0.0.2
00:23:34.275 
Discovery Log Entry 1 00:23:34.275 ---------------------- 00:23:34.275 Transport Type: 3 (TCP) 00:23:34.275 Address Family: 1 (IPv4) 00:23:34.275 Subsystem Type: 2 (NVM Subsystem) 00:23:34.275 Entry Flags: 00:23:34.275 Duplicate Returned Information: 0 00:23:34.275 Explicit Persistent Connection Support for Discovery: 0 00:23:34.275 Transport Requirements: 00:23:34.275 Secure Channel: Not Required 00:23:34.275 Port ID: 0 (0x0000) 00:23:34.275 Controller ID: 65535 (0xffff) 00:23:34.275 Admin Max SQ Size: 128 00:23:34.275 Transport Service Identifier: 4420 00:23:34.275 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:34.275 Transport Address: 10.0.0.2 [2024-10-30 14:10:32.484046] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:34.275 [2024-10-30 14:10:32.484061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1240) on tqpair=0xd6f7e0 00:23:34.275 [2024-10-30 14:10:32.484068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.275 [2024-10-30 14:10:32.484074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd13c0) on tqpair=0xd6f7e0 00:23:34.275 [2024-10-30 14:10:32.484079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.275 [2024-10-30 14:10:32.484086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd1540) on tqpair=0xd6f7e0 00:23:34.275 [2024-10-30 14:10:32.484091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.275 [2024-10-30 14:10:32.484096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd16c0) on tqpair=0xd6f7e0 00:23:34.275 [2024-10-30 14:10:32.484101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.275 [2024-10-30 14:10:32.484111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.275 [2024-10-30 14:10:32.484116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.275 [2024-10-30 14:10:32.484119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6f7e0) 00:23:34.275 [2024-10-30 14:10:32.484128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.275 [2024-10-30 14:10:32.484144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd16c0, cid 3, qid 0 00:23:34.275 [2024-10-30 14:10:32.484411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.275 [2024-10-30 14:10:32.484418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.275 [2024-10-30 14:10:32.484421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.275 [2024-10-30 14:10:32.484425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd16c0) on tqpair=0xd6f7e0 00:23:34.275 [2024-10-30 14:10:32.484436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.275 [2024-10-30 14:10:32.484440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.275 [2024-10-30 14:10:32.484443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6f7e0) 00:23:34.275 [2024-10-30 14:10:32.484450] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.275 [2024-10-30 14:10:32.484464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd16c0, cid 3, qid 0 00:23:34.275 [2024-10-30 14:10:32.484667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.275 [2024-10-30 14:10:32.484673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.275 [2024-10-30 14:10:32.484677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.275 [2024-10-30 14:10:32.484681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd16c0) on tqpair=0xd6f7e0 00:23:34.276 [2024-10-30 14:10:32.484686] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:34.276 [2024-10-30 14:10:32.484691] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:34.276 [2024-10-30 14:10:32.484701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.484705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.484708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6f7e0) 00:23:34.276 [2024-10-30 14:10:32.484715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.276 [2024-10-30 14:10:32.484726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd16c0, cid 3, qid 0 00:23:34.276 [2024-10-30 14:10:32.484937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.276 [2024-10-30 14:10:32.484944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.276 [2024-10-30 14:10:32.484947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.484951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd16c0) on tqpair=0xd6f7e0 00:23:34.276 [2024-10-30 14:10:32.484963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.484967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.484973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6f7e0) 00:23:34.276 [2024-10-30 14:10:32.484980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.276 [2024-10-30 14:10:32.484991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd16c0, cid 3, qid 0 00:23:34.276 [2024-10-30 14:10:32.485180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.276 [2024-10-30 14:10:32.485186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.276 [2024-10-30 14:10:32.485189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.485193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd16c0) on tqpair=0xd6f7e0 00:23:34.276 [2024-10-30 14:10:32.485203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.485207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.485211] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6f7e0) 00:23:34.276 [2024-10-30 14:10:32.485218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.276 [2024-10-30 14:10:32.485228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd16c0, cid 3, qid 0 00:23:34.276 [2024-10-30 14:10:32.485428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.276 [2024-10-30 14:10:32.485434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.276 [2024-10-30 14:10:32.485437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.485441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd16c0) on tqpair=0xd6f7e0 00:23:34.276 [2024-10-30 14:10:32.485451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.485455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.485459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6f7e0) 00:23:34.276 [2024-10-30 14:10:32.485466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.276 [2024-10-30 14:10:32.485476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd16c0, cid 3, qid 0 00:23:34.276 [2024-10-30 14:10:32.485661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.276 [2024-10-30 14:10:32.485668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.276 [2024-10-30 14:10:32.485671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.485675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd16c0) on tqpair=0xd6f7e0 00:23:34.276 [2024-10-30 14:10:32.485685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.485689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.485692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6f7e0) 00:23:34.276 [2024-10-30 14:10:32.485699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.276 [2024-10-30 14:10:32.485709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd16c0, cid 3, qid 0 00:23:34.276 [2024-10-30 14:10:32.489754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.276 [2024-10-30 14:10:32.489762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.276 [2024-10-30 14:10:32.489765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.276 [2024-10-30 14:10:32.489769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdd16c0) on tqpair=0xd6f7e0 00:23:34.276 [2024-10-30 14:10:32.489778] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:23:34.276 00:23:34.276 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:34.276 [2024-10-30 
14:10:32.535893] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:23:34.276 [2024-10-30 14:10:32.535939] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115463 ] 00:23:34.541 [2024-10-30 14:10:32.593334] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:34.541 [2024-10-30 14:10:32.593402] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:34.541 [2024-10-30 14:10:32.593408] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:34.541 [2024-10-30 14:10:32.593423] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:34.541 [2024-10-30 14:10:32.593434] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:34.541 [2024-10-30 14:10:32.594125] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:34.541 [2024-10-30 14:10:32.594167] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb757e0 0 00:23:34.541 [2024-10-30 14:10:32.604761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:34.541 [2024-10-30 14:10:32.604776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:34.541 [2024-10-30 14:10:32.604781] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:34.541 [2024-10-30 14:10:32.604785] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:34.541 [2024-10-30 14:10:32.604823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.541 [2024-10-30 14:10:32.604829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.541 [2024-10-30 14:10:32.604833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.541 [2024-10-30 14:10:32.604848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:34.541 [2024-10-30 14:10:32.604870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.541 [2024-10-30 14:10:32.612759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.541 [2024-10-30 14:10:32.612769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.541 [2024-10-30 14:10:32.612773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.541 [2024-10-30 14:10:32.612778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.541 [2024-10-30 14:10:32.612791] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:34.541 [2024-10-30 14:10:32.612799] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:34.541 [2024-10-30 14:10:32.612805] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:34.541 [2024-10-30 14:10:32.612820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.541 [2024-10-30 14:10:32.612824] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.541 [2024-10-30 14:10:32.612828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.612837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.542 [2024-10-30 14:10:32.612853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.542 [2024-10-30 14:10:32.613041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.542 [2024-10-30 14:10:32.613052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.542 [2024-10-30 14:10:32.613056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.542 [2024-10-30 14:10:32.613066] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:34.542 [2024-10-30 14:10:32.613073] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:34.542 [2024-10-30 14:10:32.613080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.613094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.542 [2024-10-30 14:10:32.613106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.542 [2024-10-30 14:10:32.613355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.542 [2024-10-30 14:10:32.613361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.542 [2024-10-30 14:10:32.613364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.542 [2024-10-30 14:10:32.613374] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:34.542 [2024-10-30 14:10:32.613383] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:34.542 [2024-10-30 14:10:32.613390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.613404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.542 [2024-10-30 14:10:32.613415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.542 [2024-10-30 14:10:32.613598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.542 [2024-10-30 14:10:32.613604] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.542 [2024-10-30 14:10:32.613608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.542 [2024-10-30 14:10:32.613617] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:34.542 [2024-10-30 14:10:32.613629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.613644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.542 [2024-10-30 14:10:32.613654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.542 [2024-10-30 14:10:32.613838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.542 [2024-10-30 14:10:32.613845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.542 [2024-10-30 14:10:32.613849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.542 [2024-10-30 14:10:32.613859] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:34.542 [2024-10-30 14:10:32.613864] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:34.542 [2024-10-30 14:10:32.613872] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:34.542 [2024-10-30 14:10:32.613978] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:34.542 [2024-10-30 14:10:32.613983] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:34.542 [2024-10-30 14:10:32.613991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.613999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.614005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.542 [2024-10-30 14:10:32.614017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.542 [2024-10-30 14:10:32.614222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.542 [2024-10-30 14:10:32.614228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.542 [2024-10-30 14:10:32.614232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.614236] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.542 [2024-10-30 14:10:32.614240] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:34.542 [2024-10-30 14:10:32.614250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.614254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.614258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.614265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.542 [2024-10-30 14:10:32.614276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.542 [2024-10-30 14:10:32.614446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.542 [2024-10-30 14:10:32.614452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.542 [2024-10-30 14:10:32.614455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.614459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.542 [2024-10-30 14:10:32.614464] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:34.542 [2024-10-30 14:10:32.614468] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:34.542 [2024-10-30 14:10:32.614476] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:34.542 [2024-10-30 14:10:32.614490] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:34.542 [2024-10-30 14:10:32.614500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.614503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.614510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.542 [2024-10-30 14:10:32.614523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.542 [2024-10-30 14:10:32.614778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.542 [2024-10-30 14:10:32.614785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.542 [2024-10-30 14:10:32.614789] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.614793] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb757e0): datao=0, datal=4096, cccid=0 00:23:34.542 [2024-10-30 14:10:32.614798] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd7240) on tqpair(0xb757e0): expected_datao=0, payload_size=4096 00:23:34.542 [2024-10-30 14:10:32.614802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.614818] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.614822] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.655948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.542 [2024-10-30 14:10:32.655962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.542 [2024-10-30 14:10:32.655965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.655969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.542 [2024-10-30 14:10:32.655979] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:34.542 [2024-10-30 14:10:32.655985] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:34.542 [2024-10-30 14:10:32.655989] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:34.542 [2024-10-30 14:10:32.655994] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:34.542 [2024-10-30 14:10:32.655999] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:34.542 [2024-10-30 14:10:32.656004] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:34.542 [2024-10-30 14:10:32.656013] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:34.542 [2024-10-30 14:10:32.656021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.656025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.656029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.656037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:34.542 [2024-10-30 14:10:32.656049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.542 [2024-10-30 14:10:32.656248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.542 [2024-10-30 14:10:32.656254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.542 [2024-10-30 14:10:32.656257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.656261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.542 [2024-10-30 14:10:32.656277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.656281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.656285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.656291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.542 [2024-10-30 14:10:32.656298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.656301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:34.542 [2024-10-30 14:10:32.656309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb757e0) 00:23:34.542 [2024-10-30 14:10:32.656315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.543 [2024-10-30 14:10:32.656321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.656325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.656328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb757e0) 00:23:34.543 [2024-10-30 14:10:32.656334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.543 [2024-10-30 14:10:32.656340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.656344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.656348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.543 [2024-10-30 14:10:32.656353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.543 [2024-10-30 14:10:32.656358] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.656367] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.656373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.656377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb757e0) 00:23:34.543 [2024-10-30 14:10:32.656384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-10-30 14:10:32.656398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7240, cid 0, qid 0 00:23:34.543 [2024-10-30 14:10:32.656403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd73c0, cid 1, qid 0 00:23:34.543 [2024-10-30 14:10:32.656408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7540, cid 2, qid 0 00:23:34.543 [2024-10-30 14:10:32.656413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.543 [2024-10-30 14:10:32.656418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7840, cid 4, qid 0 00:23:34.543 [2024-10-30 14:10:32.656648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.543 [2024-10-30 14:10:32.656655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.543 [2024-10-30 14:10:32.656658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.656662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7840) on tqpair=0xb757e0 00:23:34.543 [2024-10-30 14:10:32.656670] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:34.543 [2024-10-30 14:10:32.656676] 
nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.656685] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.656692] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.656698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.656702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.656706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb757e0) 00:23:34.543 [2024-10-30 14:10:32.656712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:34.543 [2024-10-30 14:10:32.656725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7840, cid 4, qid 0 00:23:34.543 [2024-10-30 14:10:32.660759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.543 [2024-10-30 14:10:32.660768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.543 [2024-10-30 14:10:32.660772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.660776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7840) on tqpair=0xb757e0 00:23:34.543 [2024-10-30 14:10:32.660847] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.660858] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.660866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.660870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb757e0) 00:23:34.543 [2024-10-30 14:10:32.660876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-10-30 14:10:32.660889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7840, cid 4, qid 0 00:23:34.543 [2024-10-30 14:10:32.661107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.543 [2024-10-30 14:10:32.661113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.543 [2024-10-30 14:10:32.661117] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661121] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb757e0): datao=0, datal=4096, cccid=4 00:23:34.543 [2024-10-30 14:10:32.661125] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd7840) on tqpair(0xb757e0): expected_datao=0, payload_size=4096 00:23:34.543 [2024-10-30 14:10:32.661130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661137] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661141] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:23:34.543 [2024-10-30 14:10:32.661286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.543 [2024-10-30 14:10:32.661292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.543 [2024-10-30 14:10:32.661296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7840) on tqpair=0xb757e0 00:23:34.543 [2024-10-30 14:10:32.661312] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:34.543 [2024-10-30 14:10:32.661322] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.661331] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.661338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb757e0) 00:23:34.543 [2024-10-30 14:10:32.661349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-10-30 14:10:32.661359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7840, cid 4, qid 0 00:23:34.543 [2024-10-30 14:10:32.661547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.543 [2024-10-30 14:10:32.661553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.543 [2024-10-30 14:10:32.661556] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661560] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb757e0): datao=0, datal=4096, cccid=4 00:23:34.543 [2024-10-30 14:10:32.661567] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd7840) on tqpair(0xb757e0): expected_datao=0, payload_size=4096 00:23:34.543 [2024-10-30 14:10:32.661572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661587] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661592] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.543 [2024-10-30 14:10:32.661773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.543 [2024-10-30 14:10:32.661776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7840) on tqpair=0xb757e0 00:23:34.543 [2024-10-30 14:10:32.661795] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.661805] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.661813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.661817] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb757e0) 00:23:34.543 [2024-10-30 14:10:32.661823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-10-30 14:10:32.661835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7840, cid 4, qid 0 00:23:34.543 [2024-10-30 14:10:32.662034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.543 [2024-10-30 14:10:32.662041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.543 [2024-10-30 14:10:32.662044] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.662048] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb757e0): datao=0, datal=4096, cccid=4 00:23:34.543 [2024-10-30 14:10:32.662052] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd7840) on tqpair(0xb757e0): expected_datao=0, payload_size=4096 00:23:34.543 [2024-10-30 14:10:32.662057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.662063] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.662067] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.662241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.543 [2024-10-30 14:10:32.662248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.543 [2024-10-30 14:10:32.662251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.662255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7840) on tqpair=0xb757e0 00:23:34.543 [2024-10-30 14:10:32.662263] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.662271] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.662280] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.662286] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.662292] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.662297] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.662305] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:34.543 [2024-10-30 14:10:32.662310] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:34.543 [2024-10-30 14:10:32.662316] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:34.543 [2024-10-30 14:10:32.662333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.662336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb757e0) 00:23:34.543 [2024-10-30 14:10:32.662343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-10-30 14:10:32.662350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.543 [2024-10-30 14:10:32.662354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.662357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb757e0) 00:23:34.544 [2024-10-30 14:10:32.662364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.544 [2024-10-30 14:10:32.662378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7840, cid 4, qid 0 00:23:34.544 [2024-10-30 14:10:32.662383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd79c0, cid 5, qid 0 00:23:34.544 [2024-10-30 14:10:32.662591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.544 [2024-10-30 14:10:32.662597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.544 [2024-10-30 14:10:32.662601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.662605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7840) on tqpair=0xb757e0 00:23:34.544 [2024-10-30 14:10:32.662612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.544 [2024-10-30 14:10:32.662617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.544 [2024-10-30 14:10:32.662621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.662625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd79c0) on tqpair=0xb757e0 00:23:34.544 [2024-10-30 14:10:32.662634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.662638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb757e0) 00:23:34.544 [2024-10-30 14:10:32.662644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.544 [2024-10-30 14:10:32.662654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd79c0, cid 5, qid 0 00:23:34.544 [2024-10-30 14:10:32.662849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.544 [2024-10-30 14:10:32.662856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.544 [2024-10-30 14:10:32.662859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.662863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd79c0) on tqpair=0xb757e0 00:23:34.544 [2024-10-30 14:10:32.662872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.662876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb757e0) 00:23:34.544 [2024-10-30 14:10:32.662882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.544 [2024-10-30 14:10:32.662893] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd79c0, cid 5, qid 0 00:23:34.544 [2024-10-30 14:10:32.663071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.544 [2024-10-30 14:10:32.663077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.544 [2024-10-30 14:10:32.663081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd79c0) on tqpair=0xb757e0 00:23:34.544 [2024-10-30 14:10:32.663096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb757e0) 00:23:34.544 [2024-10-30 14:10:32.663106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.544 [2024-10-30 14:10:32.663117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd79c0, cid 5, qid 0 00:23:34.544 [2024-10-30 14:10:32.663292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.544 [2024-10-30 14:10:32.663298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.544 [2024-10-30 14:10:32.663301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd79c0) on tqpair=0xb757e0 00:23:34.544 [2024-10-30 14:10:32.663320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb757e0) 00:23:34.544 [2024-10-30 14:10:32.663331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.544 [2024-10-30 14:10:32.663338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb757e0) 00:23:34.544 [2024-10-30 14:10:32.663348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.544 [2024-10-30 14:10:32.663356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb757e0) 00:23:34.544 [2024-10-30 14:10:32.663366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.544 [2024-10-30 14:10:32.663375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb757e0) 00:23:34.544 [2024-10-30 14:10:32.663386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.544 [2024-10-30 14:10:32.663397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd79c0, cid 5, qid 0 00:23:34.544 
[2024-10-30 14:10:32.663403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7840, cid 4, qid 0 00:23:34.544 [2024-10-30 14:10:32.663407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7b40, cid 6, qid 0 00:23:34.544 [2024-10-30 14:10:32.663412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7cc0, cid 7, qid 0 00:23:34.544 [2024-10-30 14:10:32.663678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.544 [2024-10-30 14:10:32.663684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.544 [2024-10-30 14:10:32.663687] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663691] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb757e0): datao=0, datal=8192, cccid=5 00:23:34.544 [2024-10-30 14:10:32.663695] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd79c0) on tqpair(0xb757e0): expected_datao=0, payload_size=8192 00:23:34.544 [2024-10-30 14:10:32.663700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663803] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663808] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.544 [2024-10-30 14:10:32.663822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.544 [2024-10-30 14:10:32.663826] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663829] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb757e0): datao=0, datal=512, cccid=4 00:23:34.544 [2024-10-30 14:10:32.663834] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd7840) on tqpair(0xb757e0): expected_datao=0, payload_size=512 00:23:34.544 [2024-10-30 14:10:32.663838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663844] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663848] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.544 [2024-10-30 14:10:32.663859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.544 [2024-10-30 14:10:32.663863] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663866] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb757e0): datao=0, datal=512, cccid=6 00:23:34.544 [2024-10-30 14:10:32.663871] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd7b40) on tqpair(0xb757e0): expected_datao=0, payload_size=512 00:23:34.544 [2024-10-30 14:10:32.663875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663881] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663885] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:34.544 [2024-10-30 14:10:32.663896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:34.544 [2024-10-30 14:10:32.663900] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663903] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb757e0): datao=0, datal=4096, cccid=7 00:23:34.544 [2024-10-30 14:10:32.663907] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd7cc0) on tqpair(0xb757e0): expected_datao=0, payload_size=4096 00:23:34.544 [2024-10-30 14:10:32.663912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663924] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663927] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.544 [2024-10-30 14:10:32.663943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.544 [2024-10-30 14:10:32.663947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd79c0) on tqpair=0xb757e0 00:23:34.544 [2024-10-30 14:10:32.663966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.544 [2024-10-30 14:10:32.663972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.544 [2024-10-30 14:10:32.663975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.663979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7840) on tqpair=0xb757e0 00:23:34.544 [2024-10-30 14:10:32.663990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.544 [2024-10-30 14:10:32.663996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.544 [2024-10-30 14:10:32.664000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.664004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7b40) on tqpair=0xb757e0 00:23:34.544 [2024-10-30 14:10:32.664011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.544 [2024-10-30 14:10:32.664017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.544 [2024-10-30 14:10:32.664020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.544 [2024-10-30 14:10:32.664025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7cc0) on tqpair=0xb757e0 00:23:34.544 ===================================================== 00:23:34.544 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:34.544 ===================================================== 00:23:34.544 Controller Capabilities/Features 00:23:34.544 ================================ 00:23:34.544 Vendor ID: 8086 00:23:34.544 Subsystem Vendor ID: 8086 00:23:34.544 Serial Number: SPDK00000000000001 00:23:34.544 Model Number: SPDK bdev Controller 00:23:34.544 Firmware Version: 25.01 00:23:34.544 Recommended Arb Burst: 6 00:23:34.544 IEEE OUI Identifier: e4 d2 5c 00:23:34.544 Multi-path I/O 00:23:34.544 May have multiple subsystem ports: Yes 00:23:34.544 May have multiple controllers: Yes 00:23:34.544 Associated with SR-IOV VF: No 00:23:34.544 Max Data Transfer Size: 131072 00:23:34.544 Max Number of Namespaces: 32 00:23:34.544 Max Number of I/O Queues: 127 00:23:34.544 NVMe Specification Version (VS): 1.3 00:23:34.544 NVMe Specification Version (Identify): 1.3 00:23:34.545 
Maximum Queue Entries: 128 00:23:34.545 Contiguous Queues Required: Yes 00:23:34.545 Arbitration Mechanisms Supported 00:23:34.545 Weighted Round Robin: Not Supported 00:23:34.545 Vendor Specific: Not Supported 00:23:34.545 Reset Timeout: 15000 ms 00:23:34.545 Doorbell Stride: 4 bytes 00:23:34.545 NVM Subsystem Reset: Not Supported 00:23:34.545 Command Sets Supported 00:23:34.545 NVM Command Set: Supported 00:23:34.545 Boot Partition: Not Supported 00:23:34.545 Memory Page Size Minimum: 4096 bytes 00:23:34.545 Memory Page Size Maximum: 4096 bytes 00:23:34.545 Persistent Memory Region: Not Supported 00:23:34.545 Optional Asynchronous Events Supported 00:23:34.545 Namespace Attribute Notices: Supported 00:23:34.545 Firmware Activation Notices: Not Supported 00:23:34.545 ANA Change Notices: Not Supported 00:23:34.545 PLE Aggregate Log Change Notices: Not Supported 00:23:34.545 LBA Status Info Alert Notices: Not Supported 00:23:34.545 EGE Aggregate Log Change Notices: Not Supported 00:23:34.545 Normal NVM Subsystem Shutdown event: Not Supported 00:23:34.545 Zone Descriptor Change Notices: Not Supported 00:23:34.545 Discovery Log Change Notices: Not Supported 00:23:34.545 Controller Attributes 00:23:34.545 128-bit Host Identifier: Supported 00:23:34.545 Non-Operational Permissive Mode: Not Supported 00:23:34.545 NVM Sets: Not Supported 00:23:34.545 Read Recovery Levels: Not Supported 00:23:34.545 Endurance Groups: Not Supported 00:23:34.545 Predictable Latency Mode: Not Supported 00:23:34.545 Traffic Based Keep ALive: Not Supported 00:23:34.545 Namespace Granularity: Not Supported 00:23:34.545 SQ Associations: Not Supported 00:23:34.545 UUID List: Not Supported 00:23:34.545 Multi-Domain Subsystem: Not Supported 00:23:34.545 Fixed Capacity Management: Not Supported 00:23:34.545 Variable Capacity Management: Not Supported 00:23:34.545 Delete Endurance Group: Not Supported 00:23:34.545 Delete NVM Set: Not Supported 00:23:34.545 Extended LBA Formats Supported: Not Supported 00:23:34.545 Flexible Data Placement Supported: Not Supported 00:23:34.545 00:23:34.545 Controller Memory Buffer Support 00:23:34.545 ================================ 00:23:34.545 Supported: No 00:23:34.545 00:23:34.545 Persistent Memory Region Support 00:23:34.545 ================================ 00:23:34.545 Supported: No 00:23:34.545 00:23:34.545 Admin Command Set Attributes 00:23:34.545 ============================ 00:23:34.545 Security Send/Receive: Not Supported 00:23:34.545 Format NVM: Not Supported 00:23:34.545 Firmware Activate/Download: Not Supported 00:23:34.545 Namespace Management: Not Supported 00:23:34.545 Device Self-Test: Not Supported 00:23:34.545 Directives: Not Supported 00:23:34.545 NVMe-MI: Not Supported 00:23:34.545 Virtualization Management: Not Supported 00:23:34.545 Doorbell Buffer Config: Not Supported 00:23:34.545 Get LBA Status Capability: Not Supported 00:23:34.545 Command & Feature Lockdown Capability: Not Supported 00:23:34.545 Abort Command Limit: 4 00:23:34.545 Async Event Request Limit: 4 00:23:34.545 Number of Firmware Slots: N/A 00:23:34.545 Firmware Slot 1 Read-Only: N/A 00:23:34.545 Firmware Activation Without Reset: N/A 00:23:34.545 Multiple Update Detection Support: N/A 00:23:34.545 Firmware Update Granularity: No Information Provided 00:23:34.545 Per-Namespace SMART Log: No 00:23:34.545 Asymmetric Namespace Access Log Page: Not Supported 00:23:34.545 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:34.545 Command Effects Log Page: Supported 00:23:34.545 Get Log Page Extended Data: 
Supported 00:23:34.545 Telemetry Log Pages: Not Supported 00:23:34.545 Persistent Event Log Pages: Not Supported 00:23:34.545 Supported Log Pages Log Page: May Support 00:23:34.545 Commands Supported & Effects Log Page: Not Supported 00:23:34.545 Feature Identifiers & Effects Log Page:May Support 00:23:34.545 NVMe-MI Commands & Effects Log Page: May Support 00:23:34.545 Data Area 4 for Telemetry Log: Not Supported 00:23:34.545 Error Log Page Entries Supported: 128 00:23:34.545 Keep Alive: Supported 00:23:34.545 Keep Alive Granularity: 10000 ms 00:23:34.545 00:23:34.545 NVM Command Set Attributes 00:23:34.545 ========================== 00:23:34.545 Submission Queue Entry Size 00:23:34.545 Max: 64 00:23:34.545 Min: 64 00:23:34.545 Completion Queue Entry Size 00:23:34.545 Max: 16 00:23:34.545 Min: 16 00:23:34.545 Number of Namespaces: 32 00:23:34.545 Compare Command: Supported 00:23:34.545 Write Uncorrectable Command: Not Supported 00:23:34.545 Dataset Management Command: Supported 00:23:34.545 Write Zeroes Command: Supported 00:23:34.545 Set Features Save Field: Not Supported 00:23:34.545 Reservations: Supported 00:23:34.545 Timestamp: Not Supported 00:23:34.545 Copy: Supported 00:23:34.545 Volatile Write Cache: Present 00:23:34.545 Atomic Write Unit (Normal): 1 00:23:34.545 Atomic Write Unit (PFail): 1 00:23:34.545 Atomic Compare & Write Unit: 1 00:23:34.545 Fused Compare & Write: Supported 00:23:34.545 Scatter-Gather List 00:23:34.545 SGL Command Set: Supported 00:23:34.545 SGL Keyed: Supported 00:23:34.545 SGL Bit Bucket Descriptor: Not Supported 00:23:34.545 SGL Metadata Pointer: Not Supported 00:23:34.545 Oversized SGL: Not Supported 00:23:34.545 SGL Metadata Address: Not Supported 00:23:34.545 SGL Offset: Supported 00:23:34.545 Transport SGL Data Block: Not Supported 00:23:34.545 Replay Protected Memory Block: Not Supported 00:23:34.545 00:23:34.545 Firmware Slot Information 00:23:34.545 ========================= 00:23:34.545 Active slot: 1 00:23:34.545 Slot 1 Firmware Revision: 25.01 00:23:34.545 00:23:34.545 00:23:34.545 Commands Supported and Effects 00:23:34.545 ============================== 00:23:34.545 Admin Commands 00:23:34.545 -------------- 00:23:34.545 Get Log Page (02h): Supported 00:23:34.545 Identify (06h): Supported 00:23:34.545 Abort (08h): Supported 00:23:34.545 Set Features (09h): Supported 00:23:34.545 Get Features (0Ah): Supported 00:23:34.545 Asynchronous Event Request (0Ch): Supported 00:23:34.545 Keep Alive (18h): Supported 00:23:34.545 I/O Commands 00:23:34.545 ------------ 00:23:34.545 Flush (00h): Supported LBA-Change 00:23:34.545 Write (01h): Supported LBA-Change 00:23:34.545 Read (02h): Supported 00:23:34.545 Compare (05h): Supported 00:23:34.545 Write Zeroes (08h): Supported LBA-Change 00:23:34.545 Dataset Management (09h): Supported LBA-Change 00:23:34.545 Copy (19h): Supported LBA-Change 00:23:34.545 00:23:34.545 Error Log 00:23:34.545 ========= 00:23:34.545 00:23:34.545 Arbitration 00:23:34.545 =========== 00:23:34.545 Arbitration Burst: 1 00:23:34.545 00:23:34.545 Power Management 00:23:34.545 ================ 00:23:34.545 Number of Power States: 1 00:23:34.545 Current Power State: Power State #0 00:23:34.545 Power State #0: 00:23:34.545 Max Power: 0.00 W 00:23:34.545 Non-Operational State: Operational 00:23:34.545 Entry Latency: Not Reported 00:23:34.545 Exit Latency: Not Reported 00:23:34.545 Relative Read Throughput: 0 00:23:34.545 Relative Read Latency: 0 00:23:34.545 Relative Write Throughput: 0 00:23:34.545 Relative Write Latency: 0 00:23:34.545 
Idle Power: Not Reported 00:23:34.545 Active Power: Not Reported 00:23:34.545 Non-Operational Permissive Mode: Not Supported 00:23:34.545 00:23:34.545 Health Information 00:23:34.545 ================== 00:23:34.545 Critical Warnings: 00:23:34.545 Available Spare Space: OK 00:23:34.545 Temperature: OK 00:23:34.545 Device Reliability: OK 00:23:34.545 Read Only: No 00:23:34.545 Volatile Memory Backup: OK 00:23:34.545 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:34.545 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:34.545 Available Spare: 0% 00:23:34.545 Available Spare Threshold: 0% 00:23:34.545 Life Percentage Used:[2024-10-30 14:10:32.664125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.545 [2024-10-30 14:10:32.664130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb757e0) 00:23:34.545 [2024-10-30 14:10:32.664137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.545 [2024-10-30 14:10:32.664149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd7cc0, cid 7, qid 0 00:23:34.545 [2024-10-30 14:10:32.664379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.545 [2024-10-30 14:10:32.664386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.545 [2024-10-30 14:10:32.664389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.545 [2024-10-30 14:10:32.664393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7cc0) on tqpair=0xb757e0 00:23:34.545 [2024-10-30 14:10:32.664427] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:34.545 [2024-10-30 14:10:32.664437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7240) on tqpair=0xb757e0 00:23:34.545 [2024-10-30 14:10:32.664444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.545 [2024-10-30 14:10:32.664450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd73c0) on tqpair=0xb757e0 00:23:34.545 [2024-10-30 14:10:32.664454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.545 [2024-10-30 14:10:32.664459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd7540) on tqpair=0xb757e0 00:23:34.545 [2024-10-30 14:10:32.664464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.545 [2024-10-30 14:10:32.664469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.546 [2024-10-30 14:10:32.664474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.546 [2024-10-30 14:10:32.664482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.664486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.664489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.546 [2024-10-30 14:10:32.664496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
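The identify dump interleaved above describes the NVMe-oF controller the test connected to at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1). It is printed by the test harness itself, but similar information can be pulled from the same listener by hand. A minimal sketch using the kernel nvme-cli tools, assuming the nvme-tcp initiator module is loaded and using a hypothetical /dev/nvme0 device name:

  nvme discover -t tcp -a 10.0.0.2 -s 4420                        # list subsystems exported by the target
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                                         # controller capabilities/features, as in the dump above
  nvme id-ns   /dev/nvme0 -n 1                                    # attributes of namespace 1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

This is illustrative only and not part of the recorded run, which uses SPDK's own identify path rather than nvme-cli.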
00:23:34.546 [2024-10-30 14:10:32.664509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.546 [2024-10-30 14:10:32.664722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.546 [2024-10-30 14:10:32.664728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.546 [2024-10-30 14:10:32.664732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.664736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.546 [2024-10-30 14:10:32.664743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.668757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.668761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.546 [2024-10-30 14:10:32.668768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.546 [2024-10-30 14:10:32.668784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.546 [2024-10-30 14:10:32.668969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.546 [2024-10-30 14:10:32.668976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.546 [2024-10-30 14:10:32.668979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.668986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.546 [2024-10-30 14:10:32.668992] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:34.546 [2024-10-30 14:10:32.668996] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:34.546 [2024-10-30 14:10:32.669006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.546 [2024-10-30 14:10:32.669020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.546 [2024-10-30 14:10:32.669031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.546 [2024-10-30 14:10:32.669209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.546 [2024-10-30 14:10:32.669216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.546 [2024-10-30 14:10:32.669219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.546 [2024-10-30 14:10:32.669233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.546 [2024-10-30 14:10:32.669248] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.546 [2024-10-30 14:10:32.669258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.546 [2024-10-30 14:10:32.669462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.546 [2024-10-30 14:10:32.669468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.546 [2024-10-30 14:10:32.669472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.546 [2024-10-30 14:10:32.669485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.546 [2024-10-30 14:10:32.669500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.546 [2024-10-30 14:10:32.669510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.546 [2024-10-30 14:10:32.669734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.546 [2024-10-30 14:10:32.669740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.546 [2024-10-30 14:10:32.669743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.546 [2024-10-30 14:10:32.669764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.669771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.546 [2024-10-30 14:10:32.669778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.546 [2024-10-30 14:10:32.669789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.546 [2024-10-30 14:10:32.669994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.546 [2024-10-30 14:10:32.670005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.546 [2024-10-30 14:10:32.670009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.546 [2024-10-30 14:10:32.670023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.546 [2024-10-30 14:10:32.670037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.546 [2024-10-30 14:10:32.670047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xbd76c0, cid 3, qid 0 00:23:34.546 [2024-10-30 14:10:32.670220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.546 [2024-10-30 14:10:32.670226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.546 [2024-10-30 14:10:32.670230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.546 [2024-10-30 14:10:32.670243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.546 [2024-10-30 14:10:32.670258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.546 [2024-10-30 14:10:32.670268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.546 [2024-10-30 14:10:32.670485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.546 [2024-10-30 14:10:32.670492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.546 [2024-10-30 14:10:32.670495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.546 [2024-10-30 14:10:32.670509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.546 [2024-10-30 14:10:32.670523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.546 [2024-10-30 14:10:32.670533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.546 [2024-10-30 14:10:32.670737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.546 [2024-10-30 14:10:32.670743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.546 [2024-10-30 14:10:32.670754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.546 [2024-10-30 14:10:32.670758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.670768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.670772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.670776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.670782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.670793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.670972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.670978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.670984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.670988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.670998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.671012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.671022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.671225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.671231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.671234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.671248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.671262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.671273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.671451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.671457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.671460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.671474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.671488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.671499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.671661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.671667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.671670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671674] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.671684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.671698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.671709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.671919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.671926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.671929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.671945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.671953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.671959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.671970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.672147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.672153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.672157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.672161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.672171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.672174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.672178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.672185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.672195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.672363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.672369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.672373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.672377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.672386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.672390] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.672394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.672401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.672411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.672576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.672583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.672586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.672590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.672600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.672604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.672607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.672614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.672624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.676759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.676767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.676770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.676774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.676787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.676792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.676795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb757e0) 00:23:34.547 [2024-10-30 14:10:32.676802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.547 [2024-10-30 14:10:32.676814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd76c0, cid 3, qid 0 00:23:34.547 [2024-10-30 14:10:32.676987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:34.547 [2024-10-30 14:10:32.676994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:34.547 [2024-10-30 14:10:32.676997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:34.547 [2024-10-30 14:10:32.677001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd76c0) on tqpair=0xb757e0 00:23:34.547 [2024-10-30 14:10:32.677009] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:23:34.547 0% 00:23:34.547 Data Units Read: 0 00:23:34.547 Data Units Written: 0 00:23:34.547 Host Read Commands: 0 00:23:34.547 Host Write Commands: 0 00:23:34.547 Controller Busy Time: 0 minutes 
00:23:34.547 Power Cycles: 0 00:23:34.547 Power On Hours: 0 hours 00:23:34.547 Unsafe Shutdowns: 0 00:23:34.547 Unrecoverable Media Errors: 0 00:23:34.547 Lifetime Error Log Entries: 0 00:23:34.547 Warning Temperature Time: 0 minutes 00:23:34.547 Critical Temperature Time: 0 minutes 00:23:34.547 00:23:34.547 Number of Queues 00:23:34.547 ================ 00:23:34.547 Number of I/O Submission Queues: 127 00:23:34.547 Number of I/O Completion Queues: 127 00:23:34.547 00:23:34.547 Active Namespaces 00:23:34.547 ================= 00:23:34.547 Namespace ID:1 00:23:34.547 Error Recovery Timeout: Unlimited 00:23:34.547 Command Set Identifier: NVM (00h) 00:23:34.547 Deallocate: Supported 00:23:34.547 Deallocated/Unwritten Error: Not Supported 00:23:34.547 Deallocated Read Value: Unknown 00:23:34.547 Deallocate in Write Zeroes: Not Supported 00:23:34.547 Deallocated Guard Field: 0xFFFF 00:23:34.547 Flush: Supported 00:23:34.547 Reservation: Supported 00:23:34.547 Namespace Sharing Capabilities: Multiple Controllers 00:23:34.547 Size (in LBAs): 131072 (0GiB) 00:23:34.547 Capacity (in LBAs): 131072 (0GiB) 00:23:34.547 Utilization (in LBAs): 131072 (0GiB) 00:23:34.547 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:34.547 EUI64: ABCDEF0123456789 00:23:34.547 UUID: 33673ee0-aa0a-46a2-867c-3a10f18c2e24 00:23:34.547 Thin Provisioning: Not Supported 00:23:34.547 Per-NS Atomic Units: Yes 00:23:34.547 Atomic Boundary Size (Normal): 0 00:23:34.547 Atomic Boundary Size (PFail): 0 00:23:34.547 Atomic Boundary Offset: 0 00:23:34.547 Maximum Single Source Range Length: 65535 00:23:34.548 Maximum Copy Length: 65535 00:23:34.548 Maximum Source Range Count: 1 00:23:34.548 NGUID/EUI64 Never Reused: No 00:23:34.548 Namespace Write Protected: No 00:23:34.548 Number of LBA Formats: 1 00:23:34.548 Current LBA Format: LBA Format #00 00:23:34.548 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:34.548 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:34.548 rmmod nvme_tcp 00:23:34.548 rmmod nvme_fabrics 00:23:34.548 rmmod nvme_keyring 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:34.548 14:10:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1115140 ']' 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1115140 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1115140 ']' 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1115140 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.548 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115140 00:23:34.809 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:34.809 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:34.809 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115140' 00:23:34.809 killing process with pid 1115140 00:23:34.809 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1115140 00:23:34.809 14:10:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1115140 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.809 14:10:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.354 14:10:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:37.355 00:23:37.355 real 0m11.692s 00:23:37.355 user 0m8.796s 00:23:37.355 sys 0m6.157s 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.355 ************************************ 00:23:37.355 END TEST nvmf_identify 00:23:37.355 ************************************ 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
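The teardown traced just above deletes the subsystem, stops the SPDK target process, and unloads the kernel initiator modules before the perf stage is launched. A rough manual equivalent, assuming SPDK's scripts/rpc.py is pointed at the same still-running target and reusing the PID printed in this run purely as a placeholder:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # same RPC the test issues via rpc_cmd
  kill 1115140                                                      # the target process for this run (shows up as reactor_0)
  modprobe -r nvme-tcp nvme-fabrics                                 # the harness retries this unload in a {1..20} loop

This is a sketch of what the harness automates, not a replacement for nvmftestfini, which additionally restores iptables rules and flushes the cvl_0_1 test interface as logged above.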
00:23:37.355 14:10:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.355 ************************************ 00:23:37.355 START TEST nvmf_perf 00:23:37.355 ************************************ 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:37.355 * Looking for test storage... 00:23:37.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:37.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.355 --rc genhtml_branch_coverage=1 00:23:37.355 --rc genhtml_function_coverage=1 00:23:37.355 --rc genhtml_legend=1 00:23:37.355 --rc geninfo_all_blocks=1 00:23:37.355 --rc geninfo_unexecuted_blocks=1 00:23:37.355 00:23:37.355 ' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:37.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.355 --rc genhtml_branch_coverage=1 00:23:37.355 --rc genhtml_function_coverage=1 00:23:37.355 --rc genhtml_legend=1 00:23:37.355 --rc geninfo_all_blocks=1 00:23:37.355 --rc geninfo_unexecuted_blocks=1 00:23:37.355 00:23:37.355 ' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:37.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.355 --rc genhtml_branch_coverage=1 00:23:37.355 --rc genhtml_function_coverage=1 00:23:37.355 --rc genhtml_legend=1 00:23:37.355 --rc geninfo_all_blocks=1 00:23:37.355 --rc geninfo_unexecuted_blocks=1 00:23:37.355 00:23:37.355 ' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:37.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.355 --rc genhtml_branch_coverage=1 00:23:37.355 --rc genhtml_function_coverage=1 00:23:37.355 --rc genhtml_legend=1 00:23:37.355 --rc geninfo_all_blocks=1 00:23:37.355 --rc geninfo_unexecuted_blocks=1 00:23:37.355 00:23:37.355 ' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.355 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.356 14:10:35 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:37.356 14:10:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:45.502 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:45.502 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:45.502 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.502 14:10:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:45.502 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:45.502 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.503 14:10:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:45.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:23:45.503 00:23:45.503 --- 10.0.0.2 ping statistics --- 00:23:45.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.503 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:23:45.503 00:23:45.503 --- 10.0.0.1 ping statistics --- 00:23:45.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.503 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1119782 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1119782 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1119782 ']' 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:45.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.503 14:10:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:45.503 [2024-10-30 14:10:43.016813] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:23:45.503 [2024-10-30 14:10:43.016881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.503 [2024-10-30 14:10:43.114390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:45.503 [2024-10-30 14:10:43.167043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.503 [2024-10-30 14:10:43.167097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.503 [2024-10-30 14:10:43.167110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.503 [2024-10-30 14:10:43.167117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.503 [2024-10-30 14:10:43.167123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.503 [2024-10-30 14:10:43.169121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.503 [2024-10-30 14:10:43.169280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.503 [2024-10-30 14:10:43.169441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.503 [2024-10-30 14:10:43.169442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.765 14:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.765 14:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:45.765 14:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.765 14:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.765 14:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:45.765 14:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.765 14:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:45.765 14:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:46.337 14:10:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:46.337 14:10:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:46.337 14:10:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:46.337 14:10:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:46.598 14:10:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
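(At this point perf.sh has resolved a local NVMe controller at 0000:65:00.0 and created a 64 MB, 512 B-block Malloc bdev to export. The RPC trace that follows amounts to roughly the sequence below — a sketch only, with the subsystem NQN, serial, and the 10.0.0.2:4420 listener address taken from this run's own trace:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After the subsystem is up, spdk_nvme_perf is pointed first at the local PCIe controller with -r 'trtype:PCIe traddr:0000:65:00.0' and then at the TCP subsystem with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', producing the latency tables that follow.)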
00:23:46.598 14:10:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:46.598 14:10:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:46.598 14:10:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:46.598 14:10:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:46.859 [2024-10-30 14:10:45.004606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.859 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.119 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:47.120 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.120 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:47.120 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:47.379 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.639 [2024-10-30 14:10:45.751487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.639 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:47.899 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:47.899 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:47.899 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:47.899 14:10:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:49.282 Initializing NVMe Controllers 00:23:49.282 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:49.282 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:49.282 Initialization complete. Launching workers. 
00:23:49.282 ======================================================== 00:23:49.282 Latency(us) 00:23:49.282 Device Information : IOPS MiB/s Average min max 00:23:49.282 PCIE (0000:65:00.0) NSID 1 from core 0: 79388.13 310.11 402.96 13.18 5309.66 00:23:49.282 ======================================================== 00:23:49.282 Total : 79388.13 310.11 402.96 13.18 5309.66 00:23:49.282 00:23:49.282 14:10:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:50.669 Initializing NVMe Controllers 00:23:50.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:50.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:50.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:50.669 Initialization complete. Launching workers. 00:23:50.669 ======================================================== 00:23:50.669 Latency(us) 00:23:50.670 Device Information : IOPS MiB/s Average min max 00:23:50.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.00 0.28 14427.94 261.74 44724.90 00:23:50.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15221.14 7964.60 47899.71 00:23:50.670 ======================================================== 00:23:50.670 Total : 138.00 0.54 14807.29 261.74 47899.71 00:23:50.670 00:23:50.670 14:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:51.611 Initializing NVMe Controllers 00:23:51.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:51.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:51.611 Initialization complete. Launching workers. 00:23:51.611 ======================================================== 00:23:51.611 Latency(us) 00:23:51.611 Device Information : IOPS MiB/s Average min max 00:23:51.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11661.00 45.55 2744.37 345.86 8006.96 00:23:51.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3813.00 14.89 8438.25 5449.02 17143.44 00:23:51.611 ======================================================== 00:23:51.611 Total : 15474.00 60.45 4147.41 345.86 17143.44 00:23:51.611 00:23:51.611 14:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:51.611 14:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:51.611 14:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:54.154 Initializing NVMe Controllers 00:23:54.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.154 Controller IO queue size 128, less than required. 00:23:54.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:54.154 Controller IO queue size 128, less than required. 00:23:54.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:54.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:54.154 Initialization complete. Launching workers. 00:23:54.154 ======================================================== 00:23:54.154 Latency(us) 00:23:54.154 Device Information : IOPS MiB/s Average min max 00:23:54.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1818.62 454.65 71665.78 38805.06 114428.27 00:23:54.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 621.00 155.25 212206.82 56683.00 310788.64 00:23:54.154 ======================================================== 00:23:54.154 Total : 2439.62 609.91 107440.39 38805.06 310788.64 00:23:54.154 00:23:54.154 14:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:54.415 No valid NVMe controllers or AIO or URING devices found 00:23:54.415 Initializing NVMe Controllers 00:23:54.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.415 Controller IO queue size 128, less than required. 00:23:54.415 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.415 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:54.415 Controller IO queue size 128, less than required. 00:23:54.415 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.415 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:54.415 WARNING: Some requested NVMe devices were skipped 00:23:54.415 14:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:56.958 Initializing NVMe Controllers 00:23:56.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.958 Controller IO queue size 128, less than required. 00:23:56.958 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:56.958 Controller IO queue size 128, less than required. 00:23:56.958 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:56.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:56.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:56.958 Initialization complete. Launching workers. 
00:23:56.958 00:23:56.958 ==================== 00:23:56.958 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:56.958 TCP transport: 00:23:56.958 polls: 35740 00:23:56.958 idle_polls: 24466 00:23:56.958 sock_completions: 11274 00:23:56.958 nvme_completions: 9053 00:23:56.958 submitted_requests: 13538 00:23:56.958 queued_requests: 1 00:23:56.958 00:23:56.958 ==================== 00:23:56.958 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:56.958 TCP transport: 00:23:56.958 polls: 32900 00:23:56.958 idle_polls: 20534 00:23:56.958 sock_completions: 12366 00:23:56.958 nvme_completions: 7407 00:23:56.958 submitted_requests: 11240 00:23:56.958 queued_requests: 1 00:23:56.959 ======================================================== 00:23:56.959 Latency(us) 00:23:56.959 Device Information : IOPS MiB/s Average min max 00:23:56.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2262.76 565.69 57403.75 26505.25 99582.92 00:23:56.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1851.30 462.83 70243.66 25239.56 120217.38 00:23:56.959 ======================================================== 00:23:56.959 Total : 4114.06 1028.52 63181.63 25239.56 120217.38 00:23:56.959 00:23:56.959 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:56.959 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.218 rmmod nvme_tcp 00:23:57.218 rmmod nvme_fabrics 00:23:57.218 rmmod nvme_keyring 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:57.218 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:57.219 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:57.219 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1119782 ']' 00:23:57.219 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1119782 00:23:57.219 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1119782 ']' 00:23:57.219 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1119782 00:23:57.219 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:57.219 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.219 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1119782 00:23:57.478 14:10:55 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.478 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.478 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1119782' 00:23:57.478 killing process with pid 1119782 00:23:57.478 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1119782 00:23:57.478 14:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1119782 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.387 14:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.302 14:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:01.302 00:24:01.302 real 0m24.391s 00:24:01.302 user 0m58.920s 00:24:01.302 sys 0m8.614s 00:24:01.302 14:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.302 14:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:01.302 ************************************ 00:24:01.302 END TEST nvmf_perf 00:24:01.302 ************************************ 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.561 ************************************ 00:24:01.561 START TEST nvmf_fio_host 00:24:01.561 ************************************ 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:01.561 * Looking for test storage... 
00:24:01.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.561 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.562 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:01.562 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:01.562 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.562 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:01.562 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:01.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.823 --rc genhtml_branch_coverage=1 00:24:01.823 --rc genhtml_function_coverage=1 00:24:01.823 --rc genhtml_legend=1 00:24:01.823 --rc geninfo_all_blocks=1 00:24:01.823 --rc geninfo_unexecuted_blocks=1 00:24:01.823 00:24:01.823 ' 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:01.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.823 --rc genhtml_branch_coverage=1 00:24:01.823 --rc genhtml_function_coverage=1 00:24:01.823 --rc genhtml_legend=1 00:24:01.823 --rc geninfo_all_blocks=1 00:24:01.823 --rc geninfo_unexecuted_blocks=1 00:24:01.823 00:24:01.823 ' 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:01.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.823 --rc genhtml_branch_coverage=1 00:24:01.823 --rc genhtml_function_coverage=1 00:24:01.823 --rc genhtml_legend=1 00:24:01.823 --rc geninfo_all_blocks=1 00:24:01.823 --rc geninfo_unexecuted_blocks=1 00:24:01.823 00:24:01.823 ' 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:01.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.823 --rc genhtml_branch_coverage=1 00:24:01.823 --rc genhtml_function_coverage=1 00:24:01.823 --rc genhtml_legend=1 00:24:01.823 --rc geninfo_all_blocks=1 00:24:01.823 --rc geninfo_unexecuted_blocks=1 00:24:01.823 00:24:01.823 ' 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.823 14:10:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.823 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:01.824 
14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.824 14:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:09.973 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:09.973 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.973 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:09.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:09.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:09.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:24:09.974 00:24:09.974 --- 10.0.0.2 ping statistics --- 00:24:09.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.974 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:24:09.974 00:24:09.974 --- 10.0.0.1 ping statistics --- 00:24:09.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.974 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1126770 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1126770 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1126770 ']' 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.974 14:11:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.974 [2024-10-30 14:11:07.450397] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:24:09.974 [2024-10-30 14:11:07.450464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.974 [2024-10-30 14:11:07.550364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.974 [2024-10-30 14:11:07.603283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.974 [2024-10-30 14:11:07.603342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.974 [2024-10-30 14:11:07.603350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.974 [2024-10-30 14:11:07.603358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.974 [2024-10-30 14:11:07.603365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.974 [2024-10-30 14:11:07.605409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.974 [2024-10-30 14:11:07.605570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.974 [2024-10-30 14:11:07.605731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.974 [2024-10-30 14:11:07.605731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.235 14:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.235 14:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:10.235 14:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:10.235 [2024-10-30 14:11:08.445881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.235 14:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:10.235 14:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.235 14:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.235 14:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:10.496 Malloc1 00:24:10.496 14:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.757 14:11:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:11.018 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.018 [2024-10-30 14:11:09.311664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:11.279 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:11.568 14:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.827 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:11.827 fio-3.35 00:24:11.827 Starting 1 thread 00:24:14.393 00:24:14.393 test: (groupid=0, jobs=1): 
err= 0: pid=1127386: Wed Oct 30 14:11:12 2024 00:24:14.393 read: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2004msec) 00:24:14.393 slat (usec): min=2, max=302, avg= 2.15, stdev= 2.52 00:24:14.393 clat (usec): min=3410, max=8604, avg=5133.04, stdev=371.22 00:24:14.393 lat (usec): min=3451, max=8606, avg=5135.19, stdev=371.38 00:24:14.393 clat percentiles (usec): 00:24:14.393 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4883], 00:24:14.393 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5211], 00:24:14.393 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:24:14.393 | 99.00th=[ 5997], 99.50th=[ 6325], 99.90th=[ 7635], 99.95th=[ 8029], 00:24:14.393 | 99.99th=[ 8586] 00:24:14.393 bw ( KiB/s): min=53288, max=55544, per=99.96%, avg=54878.00, stdev=1066.56, samples=4 00:24:14.393 iops : min=13322, max=13888, avg=13719.50, stdev=266.83, samples=4 00:24:14.393 write: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2004msec); 0 zone resets 00:24:14.393 slat (usec): min=2, max=274, avg= 2.22, stdev= 1.82 00:24:14.393 clat (usec): min=2829, max=8203, avg=4161.13, stdev=320.65 00:24:14.393 lat (usec): min=2832, max=8205, avg=4163.35, stdev=320.89 00:24:14.393 clat percentiles (usec): 00:24:14.393 | 1.00th=[ 3425], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 3916], 00:24:14.393 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:24:14.393 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:24:14.393 | 99.00th=[ 4883], 99.50th=[ 5669], 99.90th=[ 6456], 99.95th=[ 7111], 00:24:14.393 | 99.99th=[ 7635] 00:24:14.393 bw ( KiB/s): min=53792, max=55384, per=99.98%, avg=54792.00, stdev=692.24, samples=4 00:24:14.393 iops : min=13448, max=13846, avg=13698.00, stdev=173.06, samples=4 00:24:14.393 lat (msec) : 4=14.47%, 10=85.53% 00:24:14.393 cpu : usr=75.69%, sys=23.12%, ctx=35, majf=0, minf=17 00:24:14.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:14.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:14.393 issued rwts: total=27506,27456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:14.393 00:24:14.393 Run status group 0 (all jobs): 00:24:14.393 READ: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2004-2004msec 00:24:14.394 WRITE: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (112MB), run=2004-2004msec 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:14.394 
14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:14.394 14:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:14.654 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:14.654 fio-3.35 00:24:14.654 Starting 1 thread 00:24:17.220 00:24:17.220 test: (groupid=0, jobs=1): err= 0: pid=1128209: Wed Oct 30 14:11:15 2024 00:24:17.220 read: IOPS=9640, BW=151MiB/s (158MB/s)(302MiB/2005msec) 00:24:17.220 slat (usec): min=3, max=110, avg= 3.62, stdev= 1.57 00:24:17.220 clat (usec): min=1833, max=14768, avg=8157.13, stdev=1937.59 00:24:17.220 lat (usec): min=1836, max=14772, avg=8160.75, stdev=1937.72 00:24:17.220 clat percentiles (usec): 00:24:17.220 | 1.00th=[ 4113], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6390], 00:24:17.220 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 8586], 00:24:17.220 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[10552], 95.00th=[11076], 00:24:17.220 | 99.00th=[12649], 99.50th=[13304], 99.90th=[14353], 99.95th=[14615], 00:24:17.220 | 99.99th=[14746] 00:24:17.220 bw ( KiB/s): min=70880, max=81149, per=49.27%, avg=76007.25, stdev=4406.97, samples=4 00:24:17.220 iops : min= 4430, max= 5071, avg=4750.25, stdev=275.12, samples=4 00:24:17.220 write: IOPS=5830, BW=91.1MiB/s (95.5MB/s)(156MiB/1712msec); 0 zone resets 00:24:17.220 slat (usec): min=39, max=367, 
avg=40.90, stdev= 7.12 00:24:17.220 clat (usec): min=3519, max=14723, avg=8993.20, stdev=1319.93 00:24:17.220 lat (usec): min=3559, max=14764, avg=9034.09, stdev=1321.44 00:24:17.220 clat percentiles (usec): 00:24:17.220 | 1.00th=[ 6390], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7898], 00:24:17.220 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:24:17.220 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207], 00:24:17.220 | 99.00th=[12649], 99.50th=[13435], 99.90th=[14353], 99.95th=[14615], 00:24:17.220 | 99.99th=[14746] 00:24:17.220 bw ( KiB/s): min=74912, max=84407, per=85.25%, avg=79517.75, stdev=4251.06, samples=4 00:24:17.220 iops : min= 4682, max= 5275, avg=4969.75, stdev=265.52, samples=4 00:24:17.220 lat (msec) : 2=0.03%, 4=0.54%, 10=77.46%, 20=21.97% 00:24:17.220 cpu : usr=85.33%, sys=13.42%, ctx=15, majf=0, minf=37 00:24:17.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:17.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.220 issued rwts: total=19330,9981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.220 00:24:17.221 Run status group 0 (all jobs): 00:24:17.221 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=302MiB (317MB), run=2005-2005msec 00:24:17.221 WRITE: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=156MiB (164MB), run=1712-1712msec 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.221 rmmod nvme_tcp 00:24:17.221 rmmod nvme_fabrics 00:24:17.221 rmmod nvme_keyring 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1126770 ']' 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1126770 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1126770 ']' 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 
1126770 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:17.221 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1126770 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1126770' 00:24:17.482 killing process with pid 1126770 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1126770 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1126770 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.482 14:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.029 00:24:20.029 real 0m18.114s 00:24:20.029 user 0m59.863s 00:24:20.029 sys 0m8.301s 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.029 ************************************ 00:24:20.029 END TEST nvmf_fio_host 00:24:20.029 ************************************ 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.029 ************************************ 00:24:20.029 START TEST nvmf_failover 00:24:20.029 ************************************ 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:20.029 * Looking for test storage... 00:24:20.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:20.029 14:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.029 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:20.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.029 --rc genhtml_branch_coverage=1 00:24:20.029 --rc genhtml_function_coverage=1 00:24:20.029 --rc genhtml_legend=1 00:24:20.029 --rc geninfo_all_blocks=1 00:24:20.029 --rc geninfo_unexecuted_blocks=1 00:24:20.029 00:24:20.029 ' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:20.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.030 --rc genhtml_branch_coverage=1 00:24:20.030 --rc genhtml_function_coverage=1 00:24:20.030 --rc genhtml_legend=1 00:24:20.030 --rc geninfo_all_blocks=1 00:24:20.030 --rc geninfo_unexecuted_blocks=1 00:24:20.030 00:24:20.030 ' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:20.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.030 --rc genhtml_branch_coverage=1 00:24:20.030 --rc genhtml_function_coverage=1 00:24:20.030 --rc genhtml_legend=1 00:24:20.030 --rc geninfo_all_blocks=1 00:24:20.030 --rc geninfo_unexecuted_blocks=1 00:24:20.030 00:24:20.030 ' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:20.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.030 --rc genhtml_branch_coverage=1 00:24:20.030 --rc genhtml_function_coverage=1 00:24:20.030 --rc genhtml_legend=1 00:24:20.030 --rc geninfo_all_blocks=1 00:24:20.030 --rc geninfo_unexecuted_blocks=1 00:24:20.030 00:24:20.030 ' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.030 14:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:28.170 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:28.170 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:28.170 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:28.170 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:24:28.170 00:24:28.170 --- 10.0.0.2 ping statistics --- 00:24:28.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.170 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:24:28.170 00:24:28.170 --- 10.0.0.1 ping statistics --- 00:24:28.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.170 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.170 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1132872 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1132872 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1132872 ']' 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.171 14:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.171 [2024-10-30 14:11:25.595939] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:24:28.171 [2024-10-30 14:11:25.596008] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.171 [2024-10-30 14:11:25.695055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:28.171 [2024-10-30 14:11:25.746809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
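nvmf_tcp_init above isolates the target side of the link by moving one port (cvl_0_0) into the cvl_0_0_ns_spdk namespace, addressing the pair as 10.0.0.2 (target) and 10.0.0.1 (initiator), opening TCP port 4420 through iptables, and ping-testing both directions before nvmf_tgt is started inside that namespace. A condensed sketch of the same topology, assuming the interface names and addresses shown above and root privileges:

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                                  # root ns -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back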
00:24:28.171 [2024-10-30 14:11:25.746860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.171 [2024-10-30 14:11:25.746869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.171 [2024-10-30 14:11:25.746876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.171 [2024-10-30 14:11:25.746883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.171 [2024-10-30 14:11:25.748675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.171 [2024-10-30 14:11:25.748818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.171 [2024-10-30 14:11:25.748845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.171 14:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.171 14:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:28.171 14:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.171 14:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.171 14:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.432 14:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.432 14:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:28.432 [2024-10-30 14:11:26.633059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.432 14:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:28.693 Malloc0 00:24:28.693 14:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.953 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.214 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.214 [2024-10-30 14:11:27.457019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.214 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:29.474 [2024-10-30 14:11:27.657625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:29.474 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:29.734 [2024-10-30 14:11:27.838160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:29.734 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:29.734 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1133238 00:24:29.735 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:29.735 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1133238 /var/tmp/bdevperf.sock 00:24:29.735 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1133238 ']' 00:24:29.735 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.735 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.735 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.735 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.735 14:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:30.673 14:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.673 14:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:30.673 14:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:30.933 NVMe0n1 00:24:30.933 14:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:31.192 00:24:31.192 14:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1133573 00:24:31.192 14:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.192 14:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:32.132 14:11:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.393 [2024-10-30 14:11:30.454526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 
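At this point the target is fully configured: a TCP transport, a Malloc0 bdev exported as a namespace under nqn.2016-06.io.spdk:cnode1, and listeners on ports 4420/4421/4422, while bdevperf holds two paths to the same controller attached with -x failover, and the test has just pulled the listener behind the active path. A sketch of that RPC sequence, using the rpc.py path and socket names shown above (the listener calls are folded into a loop here for brevity):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_transport -t tcp -o -u 8192            # as invoked in the log above
  $RPC bdev_malloc_create 64 512 -b Malloc0                # RAM-backed test namespace
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
  done

  # Two paths to one controller; -x failover makes this an active/passive pair.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover

  # Removing the listener behind the active path forces I/O onto the other one.
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420

The bursts of nvmf_tcp_qpair_set_recv_state messages after each listener removal appear to come from the qpairs on the dropped listener being torn down while bdevperf fails I/O over to the surviving path; the same pattern repeats below as listeners are removed and re-added on 4420, 4421, and 4422.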
[2024-10-30 14:11:30.454582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 [2024-10-30 14:11:30.454652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f290 is same with the state(6) to be set 00:24:32.393 14:11:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:35.695 14:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:35.695 00:24:35.696 14:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:35.696 [2024-10-30 14:11:33.953183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) 
to be set 00:24:35.696 [2024-10-30 14:11:33.953232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 [2024-10-30 14:11:33.953282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140040 is same with the state(6) to be set 00:24:35.696 14:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:39.250 14:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.250 [2024-10-30 14:11:37.143166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.250 14:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:40.191 14:11:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.191 [2024-10-30 14:11:38.331322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same 
with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331485] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the 
state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 [2024-10-30 14:11:38.331595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f90 is same with the state(6) to be set 00:24:40.191 14:11:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1133573 00:24:46.782 { 00:24:46.782 "results": [ 00:24:46.782 { 00:24:46.782 "job": "NVMe0n1", 00:24:46.782 "core_mask": "0x1", 00:24:46.782 "workload": "verify", 00:24:46.782 "status": "finished", 00:24:46.782 "verify_range": { 00:24:46.782 "start": 0, 00:24:46.782 "length": 16384 00:24:46.782 }, 00:24:46.782 "queue_depth": 128, 00:24:46.782 "io_size": 4096, 00:24:46.782 "runtime": 15.004444, 00:24:46.782 "iops": 12423.98585379105, 00:24:46.782 "mibps": 48.53119474137129, 00:24:46.782 "io_failed": 8156, 00:24:46.782 "io_timeout": 0, 00:24:46.782 "avg_latency_us": 9848.92877698458, 00:24:46.782 "min_latency_us": 539.3066666666666, 00:24:46.782 "max_latency_us": 29272.746666666666 00:24:46.782 } 00:24:46.782 ], 00:24:46.782 "core_count": 1 00:24:46.782 } 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1133238 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1133238 ']' 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1133238 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1133238 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1133238' 00:24:46.782 killing process with pid 1133238 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1133238 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1133238 00:24:46.782 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:46.782 [2024-10-30 14:11:27.912503] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:24:46.782 [2024-10-30 14:11:27.912560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133238 ] 00:24:46.782 [2024-10-30 14:11:28.001273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.782 [2024-10-30 14:11:28.036830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.782 Running I/O for 15 seconds... 
00:24:46.782 11068.00 IOPS, 43.23 MiB/s [2024-10-30T13:11:45.081Z] [2024-10-30 14:11:30.456950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.456988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.782 [2024-10-30 14:11:30.457134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.782 [2024-10-30 14:11:30.457152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:46.782 [2024-10-30 14:11:30.457162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457351] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.782 [2024-10-30 14:11:30.457437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.782 [2024-10-30 14:11:30.457444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457524] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95632 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.783 [2024-10-30 14:11:30.457843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.783 [2024-10-30 14:11:30.457862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.783 
[2024-10-30 14:11:30.457879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.783 [2024-10-30 14:11:30.457895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.783 [2024-10-30 14:11:30.457912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.783 [2024-10-30 14:11:30.457928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.783 [2024-10-30 14:11:30.457944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.457987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.457995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.783 [2024-10-30 14:11:30.458156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.783 [2024-10-30 14:11:30.458163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.784 [2024-10-30 14:11:30.458181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.784 [2024-10-30 14:11:30.458197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.784 [2024-10-30 14:11:30.458213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.784 [2024-10-30 14:11:30.458230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.784 [2024-10-30 14:11:30.458248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.784 [2024-10-30 14:11:30.458264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.784 [2024-10-30 14:11:30.458283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.784 [2024-10-30 14:11:30.458300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.784 [2024-10-30 14:11:30.458316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94984 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.784 [2024-10-30 14:11:30.458402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.784 [2024-10-30 14:11:30.458418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.784 [2024-10-30 14:11:30.458434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.784 [2024-10-30 14:11:30.458450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25f70 is same with the state(6) to be set 00:24:46.784 [2024-10-30 14:11:30.458633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94992 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95000 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95008 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95016 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95024 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 
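Every completion in this stretch is printed as "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as a command aborted because its submission queue was deleted, i.e. the expected status when a qpair is torn down with I/O still outstanding. The p, m and dnr flags are bits of the same status dword, and sqhd is the submission queue head pointer reported in dword 2 of the completion entry. A minimal decoding sketch, assuming only the base-spec layout of completion dword 3; the helper name decode_cqe_dw3 and the sample value are illustrative and are not taken from SPDK:

/* Sketch: decode the status portion of an NVMe completion entry (CQE dword 3),
 * matching the "(SCT/SC) ... p:. m:. dnr:." fields printed above.
 * Field layout follows the NVMe base spec; names here are illustrative, not SPDK's. */
#include <stdint.h>
#include <stdio.h>

static void decode_cqe_dw3(uint32_t dw3)
{
    unsigned cid = dw3 & 0xffffu;        /* command identifier */
    unsigned p   = (dw3 >> 16) & 0x1;    /* phase tag */
    unsigned sc  = (dw3 >> 17) & 0xff;   /* status code */
    unsigned sct = (dw3 >> 25) & 0x7;    /* status code type */
    unsigned m   = (dw3 >> 30) & 0x1;    /* more */
    unsigned dnr = (dw3 >> 31) & 0x1;    /* do not retry */

    printf("cid:%u (%02x/%02x) p:%u m:%u dnr:%u%s\n", cid, sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x08) ? "  -> ABORTED - SQ DELETION" : "");
}

int main(void)
{
    /* sct=0 (generic command status), sc=0x08 (command aborted due to SQ deletion) */
    decode_cqe_dw3(0x08u << 17);
    return 0;
}

Compiled standalone, this prints the same "(00/08) ... p:0 m:0 dnr:0" shape as the log lines above; the dnr bit being clear is why these completions are reported as retryable aborts rather than hard failures.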
[2024-10-30 14:11:30.458782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95032 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95040 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95048 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95056 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95064 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95072 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458946] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95080 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.458972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.458978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.458985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95088 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.458992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.459000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.459006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.459012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95096 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.459020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.459028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.459034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.459040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95104 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.459047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.459055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.459060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.784 [2024-10-30 14:11:30.459066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95112 len:8 PRP1 0x0 PRP2 0x0 00:24:46.784 [2024-10-30 14:11:30.459073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.784 [2024-10-30 14:11:30.459081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.784 [2024-10-30 14:11:30.459087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95120 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
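The repeating pattern from here on is a four-message group: "aborting queued i/o", "Command completed manually:", a print of the still-queued READ/WRITE command, and the same ABORTED - SQ DELETION completion. Requests that were still waiting in the software queue when the qpair disconnected never reach the controller, so they are completed locally with an abort status. A rough sketch of that drain pattern, under the assumption that queued requests sit on a simple singly linked list with a per-request completion callback; the types and names below are illustrative and not the actual nvme_qpair.c implementation:

/* Illustrative drain loop: complete every still-queued request locally with an
 * "aborted, SQ deletion" status instead of submitting it to the controller.
 * Struct layout and names are assumptions for this sketch, not SPDK code. */
#include <stdio.h>

struct queued_req {
    unsigned long long lba;
    void (*complete)(struct queued_req *r, unsigned sct, unsigned sc);
    struct queued_req *next;
};

static void print_completion(struct queued_req *r, unsigned sct, unsigned sc)
{
    printf("lba:%llu completed with status (%02x/%02x)\n", r->lba, sct, sc);
}

static void abort_queued_reqs(struct queued_req **head)
{
    while (*head != NULL) {
        struct queued_req *r = *head;
        *head = r->next;
        fprintf(stderr, "aborting queued i/o\n");
        /* "Command completed manually": sct 0x0 (generic), sc 0x08 (SQ deletion). */
        r->complete(r, 0x0, 0x08);
    }
}

int main(void)
{
    /* Two sample LBAs taken from the log above. */
    struct queued_req b = { 94912ULL, print_completion, NULL };
    struct queued_req a = { 94904ULL, print_completion, &b };
    struct queued_req *queue = &a;
    abort_queued_reqs(&queue);
    return 0;
}

Commands that had already been sent over the TCP connection take a different path and appear earlier in this stretch with SGL data pointers; only the not-yet-submitted requests are completed "manually" like this.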
00:24:46.785 [2024-10-30 14:11:30.459113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95128 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95136 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95872 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95880 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95888 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459277] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95896 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95144 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95152 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95160 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95168 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95176 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:24:46.785 [2024-10-30 14:11:30.459445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95184 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95192 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95904 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95912 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95200 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95208 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 
14:11:30.459606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95216 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95224 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.459663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95232 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.459673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.459681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.459687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.469964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95240 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.469994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.470011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.470017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.470024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95248 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.470031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.470039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.785 [2024-10-30 14:11:30.470045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.785 [2024-10-30 14:11:30.470051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95256 len:8 PRP1 0x0 PRP2 0x0 00:24:46.785 [2024-10-30 14:11:30.470058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.785 [2024-10-30 14:11:30.470071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95920 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95264 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95272 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95280 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95288 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95296 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:95304 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95312 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95320 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95328 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95336 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95344 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95352 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 
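Two forms of data pointer show up in these command prints: I/O that was already in flight on the TCP transport is logged with "SGL DATA BLOCK OFFSET ..." or "SGL TRANSPORT DATA BLOCK TRANSPORT ...", while the manually completed queued requests show "PRP1 0x0 PRP2 0x0", most likely because their data pointer was never filled in before the qpair went away. Both forms occupy the same 16 bytes of the command (dwords 6-9), with the PSDT field in dword 0 selecting which interpretation applies. A small layout sketch following the NVMe base spec; the type names are illustrative only:

/* Command dwords 6-9 ("DPTR"): either two PRP entries or one 16-byte SGL
 * descriptor, selected by the PSDT field in command dword 0. Names are
 * illustrative; the layout follows the NVMe base specification. */
#include <stdint.h>

struct sgl_descriptor {
    uint64_t address;
    uint32_t length;
    uint8_t  reserved[3];
    uint8_t  type;               /* descriptor type/subtype, e.g. data block */
};

union nvme_dptr {
    struct {
        uint64_t prp1;           /* logged as "PRP1 0x0 PRP2 0x0" for the queued requests */
        uint64_t prp2;
    } prp;
    struct sgl_descriptor sgl1;  /* logged as "SGL DATA BLOCK ..." for in-flight I/O */
};

_Static_assert(sizeof(union nvme_dptr) == 16, "DPTR is 16 bytes");

int main(void) { return 0; }

On NVMe/TCP the SGL form is the normal case for submitted commands, which matches the WRITE and READ entries near the top of this stretch.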
[2024-10-30 14:11:30.470416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95360 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95368 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95376 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95384 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94904 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94912 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95392 len:8 PRP1 0x0 PRP2 0x0 00:24:46.786 [2024-10-30 14:11:30.470606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.786 [2024-10-30 14:11:30.470614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.786 [2024-10-30 14:11:30.470619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.786 [2024-10-30 14:11:30.470625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95400 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95408 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95416 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95424 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95432 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95440 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95448 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95456 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95464 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95472 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95480 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:46.787 [2024-10-30 14:11:30.470924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95488 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95496 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.470979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.470984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.470990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95504 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.470997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95512 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95520 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95528 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471085] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95536 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95544 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95552 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95560 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95568 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95576 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95584 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95592 len:8 PRP1 0x0 PRP2 0x0 00:24:46.787 [2024-10-30 14:11:30.471298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.787 [2024-10-30 14:11:30.471306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.787 [2024-10-30 14:11:30.471311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.787 [2024-10-30 14:11:30.471317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95600 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95608 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95616 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95624 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 
14:11:30.471423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95632 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95640 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95648 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95656 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95664 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95672 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471583] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95680 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94920 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.471670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94928 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.471678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.471685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.471691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.479342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94936 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.479399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94944 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:24:46.788 [2024-10-30 14:11:30.479428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94952 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.479460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94960 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.479487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94968 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.479513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95696 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.479541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95704 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.479568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95712 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 
14:11:30.479595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.479622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95728 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.788 [2024-10-30 14:11:30.479651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95736 len:8 PRP1 0x0 PRP2 0x0 00:24:46.788 [2024-10-30 14:11:30.479658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-10-30 14:11:30.479666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.788 [2024-10-30 14:11:30.479672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95744 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95752 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95760 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95768 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95776 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95784 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95792 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95800 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95808 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.479978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.479984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.479990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95832 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.479998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.480006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.480012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.480018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95840 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.480025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.480034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.480039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.480046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95848 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.480053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.480060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.480066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.480073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95856 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.480080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.480089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.480095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.480101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94976 len:8 PRP1 0x0 PRP2 0x0 
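The repeated "(00/08)" pairs in the completions above are the NVMe status code type and status code: 0x00 (generic) and 0x08 (ABORTED - SQ DELETION). While the qpair is being torn down for failover, every queued READ/WRITE is completed manually with that status instead of reaching the target, so it can be retried once the controller is reconnected. A minimal, illustrative sketch of decoding that pair with SPDK's public spec header (not part of the test; it assumes the headers from the spdk tree checked out above are on the include path):

/* decode_status.c - illustrative sketch only. Maps the "(00/08)" printed by
 * spdk_nvme_print_completion back to its SCT/SC names. */
#include <stdio.h>
#include "spdk/nvme_spec.h"

int main(void)
{
        struct spdk_nvme_cpl cpl = {0};

        cpl.status.sct = SPDK_NVME_SCT_GENERIC;            /* the "00" */
        cpl.status.sc  = SPDK_NVME_SC_ABORTED_SQ_DELETION; /* the "08" */

        /* A non-zero SCT/SC is an error completion; this particular one means
         * the submission queue was deleted underneath the command. */
        printf("sct=0x%02x sc=0x%02x aborted_by_sq_deletion=%d\n",
               (unsigned)cpl.status.sct, (unsigned)cpl.status.sc,
               cpl.status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl.status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION);
        return 0;
}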
00:24:46.789 [2024-10-30 14:11:30.480108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.480116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.789 [2024-10-30 14:11:30.480121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.789 [2024-10-30 14:11:30.480128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94984 len:8 PRP1 0x0 PRP2 0x0 00:24:46.789 [2024-10-30 14:11:30.480135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:30.480178] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:46.789 [2024-10-30 14:11:30.480195] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:46.789 [2024-10-30 14:11:30.480247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb25f70 (9): Bad file descriptor 00:24:46.789 [2024-10-30 14:11:30.484506] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:46.789 [2024-10-30 14:11:30.564248] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:46.789 10853.00 IOPS, 42.39 MiB/s [2024-10-30T13:11:45.088Z] 11000.00 IOPS, 42.97 MiB/s [2024-10-30T13:11:45.088Z] 11369.50 IOPS, 44.41 MiB/s [2024-10-30T13:11:45.088Z] [2024-10-30 14:11:33.953743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-10-30 14:11:33.953779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-10-30 14:11:33.953798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-10-30 14:11:33.953811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-10-30 14:11:33.953823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-10-30 14:11:33.953835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-10-30 14:11:33.953847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-10-30 14:11:33.953868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-10-30 14:11:33.953881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.789 [2024-10-30 14:11:33.953894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.789 [2024-10-30 14:11:33.953906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.789 [2024-10-30 14:11:33.953912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.953917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.953924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.953929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.953936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.953941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.953948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.953953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.953960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.953965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.953972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52232 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.953977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.953983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.953988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.953995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 
14:11:33.954097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.790 [2024-10-30 14:11:33.954279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.790 [2024-10-30 14:11:33.954291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.790 [2024-10-30 14:11:33.954303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.790 [2024-10-30 14:11:33.954314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.790 [2024-10-30 14:11:33.954328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.790 [2024-10-30 14:11:33.954339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.790 [2024-10-30 14:11:33.954352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.790 [2024-10-30 14:11:33.954359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.790 [2024-10-30 14:11:33.954364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.791 [2024-10-30 14:11:33.954576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 
[2024-10-30 14:11:33.954583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.791 [2024-10-30 14:11:33.954589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.791 [2024-10-30 14:11:33.954602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.791 [2024-10-30 14:11:33.954807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.791 [2024-10-30 14:11:33.954821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.791 [2024-10-30 14:11:33.954832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.791 [2024-10-30 14:11:33.954847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.791 [2024-10-30 14:11:33.954861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.791 [2024-10-30 14:11:33.954876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.791 [2024-10-30 14:11:33.954890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.791 [2024-10-30 14:11:33.954896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.954901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.954908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.954914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.954920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.954926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.954932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.954938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.954945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.954950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.954958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.954963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.954969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52560 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.954975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.954982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.954987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.954993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.954998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 
[2024-10-30 14:11:33.955093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.792 [2024-10-30 14:11:33.955200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:52720 len:8 PRP1 0x0 PRP2 0x0 00:24:46.792 [2024-10-30 14:11:33.955227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52728 len:8 PRP1 0x0 PRP2 0x0 00:24:46.792 [2024-10-30 14:11:33.955250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52736 len:8 PRP1 0x0 PRP2 0x0 00:24:46.792 [2024-10-30 14:11:33.955269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52744 len:8 PRP1 0x0 PRP2 0x0 00:24:46.792 [2024-10-30 14:11:33.955289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52752 len:8 PRP1 0x0 PRP2 0x0 00:24:46.792 [2024-10-30 14:11:33.955309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52120 len:8 PRP1 0x0 PRP2 0x0 00:24:46.792 [2024-10-30 14:11:33.955328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52128 len:8 PRP1 0x0 PRP2 0x0 
00:24:46.792 [2024-10-30 14:11:33.955349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52136 len:8 PRP1 0x0 PRP2 0x0 00:24:46.792 [2024-10-30 14:11:33.955369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52144 len:8 PRP1 0x0 PRP2 0x0 00:24:46.792 [2024-10-30 14:11:33.955389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.792 [2024-10-30 14:11:33.955403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52152 len:8 PRP1 0x0 PRP2 0x0 00:24:46.792 [2024-10-30 14:11:33.955408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.792 [2024-10-30 14:11:33.955413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.792 [2024-10-30 14:11:33.955417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.793 [2024-10-30 14:11:33.955423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52160 len:8 PRP1 0x0 PRP2 0x0 00:24:46.793 [2024-10-30 14:11:33.955428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:33.955433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.793 [2024-10-30 14:11:33.955437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.793 [2024-10-30 14:11:33.955442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52168 len:8 PRP1 0x0 PRP2 0x0 00:24:46.793 [2024-10-30 14:11:33.955447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:33.955479] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:46.793 [2024-10-30 14:11:33.955498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.793 [2024-10-30 14:11:33.955505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:33.955511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.793 [2024-10-30 14:11:33.955517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:33.955523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.793 [2024-10-30 14:11:33.955528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:33.955534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.793 [2024-10-30 14:11:33.955539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:33.955545] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:46.793 [2024-10-30 14:11:33.957968] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:46.793 [2024-10-30 14:11:33.957989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb25f70 (9): Bad file descriptor 00:24:46.793 [2024-10-30 14:11:33.997485] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:24:46.793 11626.20 IOPS, 45.41 MiB/s [2024-10-30T13:11:45.092Z] 11851.50 IOPS, 46.29 MiB/s [2024-10-30T13:11:45.092Z] 12015.29 IOPS, 46.93 MiB/s [2024-10-30T13:11:45.092Z] 12129.00 IOPS, 47.38 MiB/s [2024-10-30T13:11:45.092Z] [2024-10-30 14:11:38.332491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332583] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332827] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.793 [2024-10-30 14:11:38.332890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.793 [2024-10-30 14:11:38.332902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.793 [2024-10-30 14:11:38.332914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.793 [2024-10-30 14:11:38.332920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.332926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.332933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.332938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.332944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.332949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.332956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.332962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.332969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.332974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.332981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.332986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.332993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.332998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:46.794 [2024-10-30 14:11:38.333076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 
14:11:38.333196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.794 [2024-10-30 14:11:38.333374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.794 [2024-10-30 14:11:38.333379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:105 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124400 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.795 [2024-10-30 14:11:38.333671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.795 [2024-10-30 14:11:38.333717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.795 [2024-10-30 14:11:38.333729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.795 [2024-10-30 14:11:38.333741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.795 [2024-10-30 14:11:38.333757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.795 [2024-10-30 14:11:38.333768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.795 [2024-10-30 14:11:38.333780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.795 [2024-10-30 
14:11:38.333793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.795 [2024-10-30 14:11:38.333827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.795 [2024-10-30 14:11:38.333849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124536 len:8 PRP1 0x0 PRP2 0x0 00:24:46.795 [2024-10-30 14:11:38.333855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.795 [2024-10-30 14:11:38.333866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.795 [2024-10-30 14:11:38.333871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124544 len:8 PRP1 0x0 PRP2 0x0 00:24:46.795 [2024-10-30 14:11:38.333876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.795 [2024-10-30 14:11:38.333881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.795 [2024-10-30 14:11:38.333885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.795 [2024-10-30 14:11:38.333890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124552 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.333894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.333900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.333904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.333908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124560 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.333913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.333918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 
[2024-10-30 14:11:38.333921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.333925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124568 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.333930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.333937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.333941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.333945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124576 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.333951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.333956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.333960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.333964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124584 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.333968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.333974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.333978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.333982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124592 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.333988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.333993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.333997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.334001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124600 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.334007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.334013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.334016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.334020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124608 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.334025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.334030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.334035] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.334040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124616 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.334044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.334050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.334054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.334058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123896 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.334062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.334067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.334071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.334075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123904 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.334081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.334088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.334092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.334096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123912 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.334100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.334106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.334109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.334116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123920 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.334120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.334126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.334129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.345734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123928 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.345767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.345779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.345784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.345790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123936 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.345796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.345802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.345806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.345810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123944 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.345816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.345821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.796 [2024-10-30 14:11:38.345826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.796 [2024-10-30 14:11:38.345831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123952 len:8 PRP1 0x0 PRP2 0x0 00:24:46.796 [2024-10-30 14:11:38.345836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.345876] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:46.796 [2024-10-30 14:11:38.345902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.796 [2024-10-30 14:11:38.345909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.345917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.796 [2024-10-30 14:11:38.345923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.345934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.796 [2024-10-30 14:11:38.345939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.345947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.796 [2024-10-30 14:11:38.345952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.796 [2024-10-30 14:11:38.345958] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:24:46.796 [2024-10-30 14:11:38.345994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb25f70 (9): Bad file descriptor 00:24:46.796 [2024-10-30 14:11:38.348627] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:46.796 12105.33 IOPS, 47.29 MiB/s [2024-10-30T13:11:45.095Z] [2024-10-30 14:11:38.416421] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:24:46.796 12199.00 IOPS, 47.65 MiB/s [2024-10-30T13:11:45.095Z] 12256.09 IOPS, 47.88 MiB/s [2024-10-30T13:11:45.095Z] 12313.17 IOPS, 48.10 MiB/s [2024-10-30T13:11:45.095Z] 12354.00 IOPS, 48.26 MiB/s [2024-10-30T13:11:45.095Z] 12389.57 IOPS, 48.40 MiB/s 00:24:46.796 Latency(us) 00:24:46.796 [2024-10-30T13:11:45.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.796 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:46.796 Verification LBA range: start 0x0 length 0x4000 00:24:46.796 NVMe0n1 : 15.00 12423.99 48.53 543.57 0.00 9848.93 539.31 29272.75 00:24:46.796 [2024-10-30T13:11:45.095Z] =================================================================================================================== 00:24:46.796 [2024-10-30T13:11:45.095Z] Total : 12423.99 48.53 543.57 0.00 9848.93 539.31 29272.75 00:24:46.796 Received shutdown signal, test time was about 15.000000 seconds 00:24:46.796 00:24:46.796 Latency(us) 00:24:46.796 [2024-10-30T13:11:45.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.796 [2024-10-30T13:11:45.095Z] =================================================================================================================== 00:24:46.796 [2024-10-30T13:11:45.095Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.796 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:46.796 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:46.796 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:46.796 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1136537 00:24:46.796 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1136537 /var/tmp/bdevperf.sock 00:24:46.796 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:46.796 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1136537 ']' 00:24:46.797 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.797 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.797 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
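At this point host/failover.sh has confirmed that the earlier bdevperf run logged exactly three 'Resetting controller successful' messages (count=3 above) and is relaunching bdevperf idle on an RPC socket. A minimal sketch of that sequence, using the paths shown in the trace and assuming the count is taken from the try.txt transcript the script cats further down:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Three resets are expected, one per forced failover; anything else fails the test.
  count=$(grep -c 'Resetting controller successful' "$spdk/test/nvmf/host/try.txt")
  (( count == 3 )) || exit 1
  # Relaunch bdevperf idle (-z) with an RPC socket so perform_tests can drive it later.
  "$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # waitforlisten (a common.sh helper, seen in the trace) then blocks until /var/tmp/bdevperf.sock accepts RPCs.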
00:24:46.797 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.797 14:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.370 14:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.370 14:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:47.370 14:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:47.370 [2024-10-30 14:11:45.627102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:47.370 14:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:47.631 [2024-10-30 14:11:45.811563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:47.631 14:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:48.203 NVMe0n1 00:24:48.203 14:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:48.203 00:24:48.203 14:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:48.463 00:24:48.463 14:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:48.463 14:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:48.724 14:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:48.985 14:11:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:52.289 14:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:52.289 14:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:52.289 14:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1137629 00:24:52.289 14:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:52.289 14:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1137629 00:24:53.232 { 00:24:53.232 "results": [ 00:24:53.232 { 00:24:53.232 "job": "NVMe0n1", 00:24:53.232 "core_mask": "0x1", 
00:24:53.232 "workload": "verify", 00:24:53.232 "status": "finished", 00:24:53.232 "verify_range": { 00:24:53.232 "start": 0, 00:24:53.232 "length": 16384 00:24:53.232 }, 00:24:53.232 "queue_depth": 128, 00:24:53.232 "io_size": 4096, 00:24:53.232 "runtime": 1.00405, 00:24:53.232 "iops": 12685.623225934964, 00:24:53.232 "mibps": 49.55321572630845, 00:24:53.232 "io_failed": 0, 00:24:53.232 "io_timeout": 0, 00:24:53.232 "avg_latency_us": 10043.955225458638, 00:24:53.232 "min_latency_us": 836.2666666666667, 00:24:53.232 "max_latency_us": 14527.146666666667 00:24:53.232 } 00:24:53.232 ], 00:24:53.232 "core_count": 1 00:24:53.232 } 00:24:53.232 14:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:53.232 [2024-10-30 14:11:44.690577] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:24:53.232 [2024-10-30 14:11:44.690636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136537 ] 00:24:53.232 [2024-10-30 14:11:44.772878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.232 [2024-10-30 14:11:44.801147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.232 [2024-10-30 14:11:47.065914] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:53.232 [2024-10-30 14:11:47.065953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.232 [2024-10-30 14:11:47.065962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.232 [2024-10-30 14:11:47.065969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.232 [2024-10-30 14:11:47.065975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.232 [2024-10-30 14:11:47.065981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.232 [2024-10-30 14:11:47.065986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.232 [2024-10-30 14:11:47.065991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.232 [2024-10-30 14:11:47.065996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.232 [2024-10-30 14:11:47.066002] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:24:53.232 [2024-10-30 14:11:47.066025] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:53.232 [2024-10-30 14:11:47.066036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64bf70 (9): Bad file descriptor 00:24:53.232 [2024-10-30 14:11:47.118810] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:53.232 Running I/O for 1 seconds... 00:24:53.232 12609.00 IOPS, 49.25 MiB/s 00:24:53.232 Latency(us) 00:24:53.232 [2024-10-30T13:11:51.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.232 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:53.232 Verification LBA range: start 0x0 length 0x4000 00:24:53.232 NVMe0n1 : 1.00 12685.62 49.55 0.00 0.00 10043.96 836.27 14527.15 00:24:53.232 [2024-10-30T13:11:51.531Z] =================================================================================================================== 00:24:53.232 [2024-10-30T13:11:51.531Z] Total : 12685.62 49.55 0.00 0.00 10043.96 836.27 14527.15 00:24:53.232 14:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:53.232 14:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:53.493 14:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:53.493 14:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:53.493 14:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:53.753 14:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.013 14:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1136537 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1136537 ']' 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1136537 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136537 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136537' 00:24:57.314 killing process with pid 1136537 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1136537 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1136537 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:57.314 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.576 rmmod nvme_tcp 00:24:57.576 rmmod nvme_fabrics 00:24:57.576 rmmod nvme_keyring 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1132872 ']' 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1132872 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1132872 ']' 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1132872 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1132872 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1132872' 00:24:57.576 killing process with pid 1132872 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1132872 00:24:57.576 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1132872 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.838 14:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.754 14:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.754 00:24:59.754 real 0m40.134s 00:24:59.754 user 2m3.617s 00:24:59.754 sys 0m8.512s 00:24:59.754 14:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.754 14:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.754 ************************************ 00:24:59.754 END TEST nvmf_failover 00:24:59.754 ************************************ 00:24:59.754 14:11:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:59.754 14:11:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:59.754 14:11:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.754 14:11:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.016 ************************************ 00:25:00.016 START TEST nvmf_host_discovery 00:25:00.016 ************************************ 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:00.016 * Looking for test storage... 
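Between the failover timing summary above (real 0m40.134s, END TEST nvmf_failover) and the nvmf_host_discovery preamble that follows, the trace is tearing the fixture down. A condensed sketch of that teardown, with pids and paths as they appear in the trace; killprocess and nvmftestfini are helpers from the test's common scripts and are only approximated here:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Stop the bdevperf client (pid 1136537 in this run), then drop the subsystem it used.
  kill 1136537 && wait 1136537
  "$spdk/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # Remove the transcript, unload the host-side NVMe/TCP modules, and stop the nvmf target.
  rm -f "$spdk/test/nvmf/host/try.txt"
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
  kill 1132872 && wait 1132872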
00:25:00.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:00.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.016 --rc genhtml_branch_coverage=1 00:25:00.016 --rc genhtml_function_coverage=1 00:25:00.016 --rc genhtml_legend=1 00:25:00.016 --rc geninfo_all_blocks=1 00:25:00.016 --rc geninfo_unexecuted_blocks=1 00:25:00.016 00:25:00.016 ' 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:00.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.016 --rc genhtml_branch_coverage=1 00:25:00.016 --rc genhtml_function_coverage=1 00:25:00.016 --rc genhtml_legend=1 00:25:00.016 --rc geninfo_all_blocks=1 00:25:00.016 --rc geninfo_unexecuted_blocks=1 00:25:00.016 00:25:00.016 ' 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:00.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.016 --rc genhtml_branch_coverage=1 00:25:00.016 --rc genhtml_function_coverage=1 00:25:00.016 --rc genhtml_legend=1 00:25:00.016 --rc geninfo_all_blocks=1 00:25:00.016 --rc geninfo_unexecuted_blocks=1 00:25:00.016 00:25:00.016 ' 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:00.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.016 --rc genhtml_branch_coverage=1 00:25:00.016 --rc genhtml_function_coverage=1 00:25:00.016 --rc genhtml_legend=1 00:25:00.016 --rc geninfo_all_blocks=1 00:25:00.016 --rc geninfo_unexecuted_blocks=1 00:25:00.016 00:25:00.016 ' 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:00.016 14:11:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.016 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.278 14:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:08.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:08.421 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.421 14:12:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.421 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:08.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:08.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.422 
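The long nvmf/common.sh trace above is NIC discovery: it matches the E810 device IDs (0x1592, 0x159b) against the PCI bus and records the net devices found under each matching port, cvl_0_0 and cvl_0_1 on this host, before nvmf_tcp_init starts assigning the 10.0.0.x addresses. A self-contained approximation of the per-port lookup, assuming the same sysfs layout as the 'Found 0000:4b:00.0 ... cvl_0_0' lines above:

  # PCI address and expected result taken from the discovery output above.
  pci=0000:4b:00.0
  ls "/sys/bus/pci/devices/$pci/net/"   # prints cvl_0_0 on this host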
14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:25:08.422 00:25:08.422 --- 10.0.0.2 ping statistics --- 00:25:08.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.422 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:25:08.422 00:25:08.422 --- 10.0.0.1 ping statistics --- 00:25:08.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.422 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1142967 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1142967 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1142967 ']' 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.422 14:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.422 [2024-10-30 14:12:05.946161] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:25:08.422 [2024-10-30 14:12:05.946228] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.422 [2024-10-30 14:12:06.045882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.422 [2024-10-30 14:12:06.096233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.422 [2024-10-30 14:12:06.096287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.422 [2024-10-30 14:12:06.096295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.422 [2024-10-30 14:12:06.096302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.422 [2024-10-30 14:12:06.096309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.422 [2024-10-30 14:12:06.097064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.685 [2024-10-30 14:12:06.808077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.685 [2024-10-30 14:12:06.820342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.685 null0 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.685 null1 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1143099 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1143099 /tmp/host.sock 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1143099 ']' 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:08.685 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.685 14:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.685 [2024-10-30 14:12:06.917589] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:25:08.685 [2024-10-30 14:12:06.917653] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143099 ] 00:25:08.947 [2024-10-30 14:12:07.010066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.948 [2024-10-30 14:12:07.063314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.521 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.784 14:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.784 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.046 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.047 [2024-10-30 14:12:08.091514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:10.047 14:12:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:10.047 14:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:10.620 [2024-10-30 14:12:08.806725] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:10.620 [2024-10-30 14:12:08.806747] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:10.620 [2024-10-30 14:12:08.806761] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.882 [2024-10-30 14:12:08.935188] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:10.882 [2024-10-30 14:12:08.994944] 
bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:10.882 [2024-10-30 14:12:08.995887] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f86930:1 started. 00:25:10.882 [2024-10-30 14:12:08.997521] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:10.882 [2024-10-30 14:12:08.997539] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:10.882 [2024-10-30 14:12:09.005461] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f86930 was disconnected and freed. delete nvme_qpair. 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.143 14:12:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:11.143 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.405 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.667 [2024-10-30 14:12:09.754715] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f86ce0:1 started. 00:25:11.667 [2024-10-30 14:12:09.757672] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f86ce0 was disconnected and freed. delete nvme_qpair. 
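The block above provisions the subsystem on the target (nvmf_create_subsystem cnode0, nvmf_subsystem_add_ns with null0 and then null1, nvmf_subsystem_add_listener on 10.0.0.2:4420, nvmf_subsystem_add_host for nqn.2021-12.io.spdk:test) and then polls the host until controller nvme0 and its namespaces appear as bdevs, counting async notifications along the way. The polling helpers are only referenced by script line number in the log; the following is a sketch of what they appear to expand to, pieced together from the rpc_cmd/jq/sort/xargs fragments shown (the exact bodies in host/discovery.sh may differ):

    # reconstruction of the helpers used throughout this test (sketch, not the script's literal code)
    get_subsystem_names() {
        rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_notification_count() {    # $1 = first notification id to fetch (-i in the log)
        rpc.py -s /tmp/host.sock notify_get_notifications -i "$1" | jq '. | length'
    }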
00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.667 [2024-10-30 14:12:09.844018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:11.667 [2024-10-30 14:12:09.844343] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:11.667 [2024-10-30 14:12:09.844364] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.667 14:12:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.667 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:11.927 [2024-10-30 14:12:09.972195] bdev_nvme.c:7215:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:11.927 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.927 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:11.927 14:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:11.927 [2024-10-30 14:12:10.032282] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:11.927 [2024-10-30 14:12:10.032323] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:11.927 [2024-10-30 14:12:10.032332] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:11.927 [2024-10-30 14:12:10.032337] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:12.877 14:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:12.877 14:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:12.877 14:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:12.877 14:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:12.877 14:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:12.877 14:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:12.877 14:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.877 14:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:12.877 14:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:12.877 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.878 [2024-10-30 14:12:11.100106] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:12.878 [2024-10-30 14:12:11.100124] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:12.878 [2024-10-30 14:12:11.109302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.878 [2024-10-30 14:12:11.109317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.878 [2024-10-30 14:12:11.109324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.878 [2024-10-30 14:12:11.109330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.878 [2024-10-30 14:12:11.109336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.878 [2024-10-30 14:12:11.109341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.878 [2024-10-30 14:12:11.109347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.878 [2024-10-30 14:12:11.109352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.878 [2024-10-30 14:12:11.109358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f57010 is same with the state(6) to be set 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:12.878 [2024-10-30 14:12:11.119315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57010 (9): Bad file descriptor 00:25:12.878 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.878 [2024-10-30 14:12:11.129350] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.878 [2024-10-30 14:12:11.129358] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.878 [2024-10-30 14:12:11.129363] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.878 [2024-10-30 14:12:11.129367] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.878 [2024-10-30 14:12:11.129382] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:12.878 [2024-10-30 14:12:11.129686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.879 [2024-10-30 14:12:11.129698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f57010 with addr=10.0.0.2, port=4420 00:25:12.879 [2024-10-30 14:12:11.129704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f57010 is same with the state(6) to be set 00:25:12.879 [2024-10-30 14:12:11.129714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57010 (9): Bad file descriptor 00:25:12.879 [2024-10-30 14:12:11.129726] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.879 [2024-10-30 14:12:11.129731] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.879 [2024-10-30 14:12:11.129738] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.879 [2024-10-30 14:12:11.129743] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.879 [2024-10-30 14:12:11.129752] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
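The ERROR entries above are the expected fallout of the listener change rather than a test failure: the preceding RPCs add a second listener on 10.0.0.2:4421 and then remove the original one on 4420, so the existing qpair to 4420 is torn down and each reconnect attempt is refused until the discovery poller settles on the remaining path (as the "found again"/path checks further below suggest). The two target-side commands driving this, as they appear in the log:

    # path changes behind the reconnect errors above (commands as shown in the log)
    rpc.py nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421  # new path
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420  # old path dropped
    # errno 111 is ECONNREFUSED: nothing listens on 4420 any more, so each retry fails immediately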
00:25:12.879 [2024-10-30 14:12:11.129759] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.879 [2024-10-30 14:12:11.139410] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.879 [2024-10-30 14:12:11.139418] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.879 [2024-10-30 14:12:11.139422] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.879 [2024-10-30 14:12:11.139425] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.879 [2024-10-30 14:12:11.139436] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:12.879 [2024-10-30 14:12:11.139785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.879 [2024-10-30 14:12:11.139804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f57010 with addr=10.0.0.2, port=4420 00:25:12.879 [2024-10-30 14:12:11.139810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f57010 is same with the state(6) to be set 00:25:12.879 [2024-10-30 14:12:11.139821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57010 (9): Bad file descriptor 00:25:12.879 [2024-10-30 14:12:11.139828] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.879 [2024-10-30 14:12:11.139833] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.879 [2024-10-30 14:12:11.139838] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.879 [2024-10-30 14:12:11.139843] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.879 [2024-10-30 14:12:11.139847] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:12.879 [2024-10-30 14:12:11.139854] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.879 [2024-10-30 14:12:11.149465] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.879 [2024-10-30 14:12:11.149475] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.879 [2024-10-30 14:12:11.149478] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.879 [2024-10-30 14:12:11.149481] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.879 [2024-10-30 14:12:11.149493] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:12.879 [2024-10-30 14:12:11.149934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.879 [2024-10-30 14:12:11.149966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f57010 with addr=10.0.0.2, port=4420 00:25:12.879 [2024-10-30 14:12:11.149974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f57010 is same with the state(6) to be set 00:25:12.879 [2024-10-30 14:12:11.149989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57010 (9): Bad file descriptor 00:25:12.879 [2024-10-30 14:12:11.150002] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.879 [2024-10-30 14:12:11.150008] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.879 [2024-10-30 14:12:11.150014] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.879 [2024-10-30 14:12:11.150019] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.879 [2024-10-30 14:12:11.150023] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:12.879 [2024-10-30 14:12:11.150032] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.879 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.879 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:12.879 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:12.881 [2024-10-30 14:12:11.159522] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.881 [2024-10-30 14:12:11.159532] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.881 [2024-10-30 14:12:11.159536] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.881 [2024-10-30 14:12:11.159539] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.881 [2024-10-30 14:12:11.159551] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:12.881 [2024-10-30 14:12:11.159985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.881 [2024-10-30 14:12:11.160017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f57010 with addr=10.0.0.2, port=4420 00:25:12.881 [2024-10-30 14:12:11.160026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f57010 is same with the state(6) to be set 00:25:12.881 [2024-10-30 14:12:11.160040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57010 (9): Bad file descriptor 00:25:12.881 [2024-10-30 14:12:11.160059] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.881 [2024-10-30 14:12:11.160064] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.881 [2024-10-30 14:12:11.160070] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.881 [2024-10-30 14:12:11.160075] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.881 [2024-10-30 14:12:11.160079] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:12.881 [2024-10-30 14:12:11.160088] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.881 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:12.881 [2024-10-30 14:12:11.169582] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.881 [2024-10-30 14:12:11.169592] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.881 [2024-10-30 14:12:11.169596] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.881 [2024-10-30 14:12:11.169599] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.881 [2024-10-30 14:12:11.169612] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:12.881 [2024-10-30 14:12:11.169941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.881 [2024-10-30 14:12:11.169973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f57010 with addr=10.0.0.2, port=4420 00:25:12.881 [2024-10-30 14:12:11.169982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f57010 is same with the state(6) to be set 00:25:12.881 [2024-10-30 14:12:11.169997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57010 (9): Bad file descriptor 00:25:12.881 [2024-10-30 14:12:11.170016] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.881 [2024-10-30 14:12:11.170021] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.881 [2024-10-30 14:12:11.170027] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.882 [2024-10-30 14:12:11.170033] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.882 [2024-10-30 14:12:11.170036] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:12.882 [2024-10-30 14:12:11.170045] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.147 [2024-10-30 14:12:11.179642] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.147 [2024-10-30 14:12:11.179652] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:13.147 [2024-10-30 14:12:11.179656] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.147 [2024-10-30 14:12:11.179660] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.147 [2024-10-30 14:12:11.179672] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.147 [2024-10-30 14:12:11.179974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-10-30 14:12:11.179986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f57010 with addr=10.0.0.2, port=4420 00:25:13.147 [2024-10-30 14:12:11.179991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f57010 is same with the state(6) to be set 00:25:13.147 [2024-10-30 14:12:11.179999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57010 (9): Bad file descriptor 00:25:13.147 [2024-10-30 14:12:11.180007] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.147 [2024-10-30 14:12:11.180012] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.147 [2024-10-30 14:12:11.180021] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.147 [2024-10-30 14:12:11.180026] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
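Once the discovery log page is re-read, the entries that follow report the 4420 path as "not found" while 4421 is "found again", and the script confirms that only the second port remains on controller nvme0. A sketch of that check, reconstructed from the bdev_nvme_get_controllers/jq/sort fragments in the log (expected output here is 4421):

    # remaining transport service ids (ports) for nvme0, as the get_subsystem_paths helper appears to check
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # prints: 4421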
00:25:13.147 [2024-10-30 14:12:11.180029] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.147 [2024-10-30 14:12:11.180036] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.147 [2024-10-30 14:12:11.187075] bdev_nvme.c:7078:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:13.147 [2024-10-30 14:12:11.187089] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.147 14:12:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:13.147 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.148 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.408 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.408 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:13.408 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:13.408 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:13.408 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.408 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.408 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.408 14:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.350 [2024-10-30 14:12:12.536917] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:14.350 [2024-10-30 14:12:12.536930] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:14.350 [2024-10-30 14:12:12.536940] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:14.351 [2024-10-30 14:12:12.626202] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:14.922 [2024-10-30 14:12:12.934606] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:14.922 [2024-10-30 14:12:12.935288] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1f8b530:1 started. 
00:25:14.922 [2024-10-30 14:12:12.936706] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:14.922 [2024-10-30 14:12:12.936729] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.922 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.922 [2024-10-30 14:12:12.945416] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1f8b530 was disconnected and freed. delete nvme_qpair. 
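The duplicate-start check exercised here re-issues bdev_nvme_start_discovery with a controller name ("nvme") that is already attached, and the JSON-RPC error recorded just below (-17, "File exists") is the expected result. As an illustration only, the same scenario could be reproduced by hand with SPDK's scripts/rpc.py against the /tmp/host.sock application socket used throughout this run; this is a sketch of the calls the test's rpc_cmd wrapper forwards, not a replacement for the test itself:

# Sketch: duplicate discovery start against the host app socket used in this log.
RPC="scripts/rpc.py -s /tmp/host.sock"

# First start attaches the discovered subsystem (-w waits for attach to finish).
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Second start with the same -b name is rejected; the log below shows the
# resulting JSON-RPC error: code -17, "File exists".
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
    || echo "duplicate discovery start rejected as expected"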
00:25:14.922 request: 00:25:14.922 { 00:25:14.922 "name": "nvme", 00:25:14.922 "trtype": "tcp", 00:25:14.922 "traddr": "10.0.0.2", 00:25:14.922 "adrfam": "ipv4", 00:25:14.922 "trsvcid": "8009", 00:25:14.922 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:14.922 "wait_for_attach": true, 00:25:14.922 "method": "bdev_nvme_start_discovery", 00:25:14.922 "req_id": 1 00:25:14.922 } 00:25:14.922 Got JSON-RPC error response 00:25:14.922 response: 00:25:14.922 { 00:25:14.922 "code": -17, 00:25:14.922 "message": "File exists" 00:25:14.922 } 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:14.923 14:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.923 request: 00:25:14.923 { 00:25:14.923 "name": "nvme_second", 00:25:14.923 "trtype": "tcp", 00:25:14.923 "traddr": "10.0.0.2", 00:25:14.923 "adrfam": "ipv4", 00:25:14.923 "trsvcid": "8009", 00:25:14.923 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:14.923 "wait_for_attach": true, 00:25:14.923 "method": "bdev_nvme_start_discovery", 00:25:14.923 "req_id": 1 00:25:14.923 } 00:25:14.923 Got JSON-RPC error response 00:25:14.923 response: 00:25:14.923 { 00:25:14.923 "code": -17, 00:25:14.923 "message": "File exists" 00:25:14.923 } 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:14.923 14:12:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.923 14:12:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.306 [2024-10-30 14:12:14.192286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.306 [2024-10-30 14:12:14.192309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6f450 with addr=10.0.0.2, port=8010 00:25:16.306 [2024-10-30 14:12:14.192320] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:16.306 [2024-10-30 14:12:14.192325] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:16.306 [2024-10-30 14:12:14.192331] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:17.247 [2024-10-30 14:12:15.194663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.247 [2024-10-30 14:12:15.194681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6f450 with addr=10.0.0.2, port=8010 00:25:17.247 [2024-10-30 14:12:15.194690] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:17.247 [2024-10-30 14:12:15.194695] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:17.247 [2024-10-30 14:12:15.194700] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:18.190 [2024-10-30 14:12:16.196670] 
bdev_nvme.c:7334:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:18.190 request: 00:25:18.190 { 00:25:18.190 "name": "nvme_second", 00:25:18.190 "trtype": "tcp", 00:25:18.190 "traddr": "10.0.0.2", 00:25:18.190 "adrfam": "ipv4", 00:25:18.190 "trsvcid": "8010", 00:25:18.190 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:18.190 "wait_for_attach": false, 00:25:18.190 "attach_timeout_ms": 3000, 00:25:18.190 "method": "bdev_nvme_start_discovery", 00:25:18.190 "req_id": 1 00:25:18.190 } 00:25:18.190 Got JSON-RPC error response 00:25:18.190 response: 00:25:18.190 { 00:25:18.190 "code": -110, 00:25:18.190 "message": "Connection timed out" 00:25:18.190 } 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1143099 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:18.190 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.191 rmmod nvme_tcp 00:25:18.191 rmmod nvme_fabrics 00:25:18.191 rmmod nvme_keyring 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:18.191 14:12:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1142967 ']' 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1142967 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1142967 ']' 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1142967 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1142967 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1142967' 00:25:18.191 killing process with pid 1142967 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1142967 00:25:18.191 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1142967 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.452 14:12:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.365 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.365 00:25:20.365 real 0m20.483s 00:25:20.365 user 0m23.823s 00:25:20.365 sys 0m7.204s 00:25:20.365 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.365 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.365 ************************************ 00:25:20.365 END TEST nvmf_host_discovery 00:25:20.365 ************************************ 00:25:20.365 14:12:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:20.365 14:12:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.365 14:12:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.365 14:12:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.365 ************************************ 00:25:20.365 START TEST nvmf_host_multipath_status 00:25:20.365 ************************************ 00:25:20.365 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:20.627 * Looking for test storage... 00:25:20.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.627 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:20.627 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:20.627 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:20.627 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:20.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.628 --rc genhtml_branch_coverage=1 00:25:20.628 --rc genhtml_function_coverage=1 00:25:20.628 --rc genhtml_legend=1 00:25:20.628 --rc geninfo_all_blocks=1 00:25:20.628 --rc geninfo_unexecuted_blocks=1 00:25:20.628 00:25:20.628 ' 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:20.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.628 --rc genhtml_branch_coverage=1 00:25:20.628 --rc genhtml_function_coverage=1 00:25:20.628 --rc genhtml_legend=1 00:25:20.628 --rc geninfo_all_blocks=1 00:25:20.628 --rc geninfo_unexecuted_blocks=1 00:25:20.628 00:25:20.628 ' 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:20.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.628 --rc genhtml_branch_coverage=1 00:25:20.628 --rc genhtml_function_coverage=1 00:25:20.628 --rc genhtml_legend=1 00:25:20.628 --rc geninfo_all_blocks=1 00:25:20.628 --rc geninfo_unexecuted_blocks=1 00:25:20.628 00:25:20.628 ' 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:20.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.628 --rc genhtml_branch_coverage=1 00:25:20.628 --rc genhtml_function_coverage=1 00:25:20.628 --rc genhtml_legend=1 00:25:20.628 --rc geninfo_all_blocks=1 00:25:20.628 --rc geninfo_unexecuted_blocks=1 00:25:20.628 00:25:20.628 ' 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.628 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.629 14:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:28.775 14:12:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:28.775 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
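The prologue above scans the PCI bus for supported NICs and, on this machine, matches two Intel E810 ports (vendor 0x8086, device 0x159b, ice driver). As a rough illustration only (the test's common.sh uses its own PCI cache, not this command), the same classification can be approximated from lspci output:

# Illustrative check only: list E810-class ports by the vendor/device IDs
# matched above (8086:1592 or 8086:159b); not the script's actual mechanism.
lspci -Dnn | grep -Ei '8086:(1592|159b)'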
00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:28.775 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:28.775 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.775 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:25:28.776 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.776 14:12:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:28.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:25:28.776 00:25:28.776 --- 10.0.0.2 ping statistics --- 00:25:28.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.776 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:25:28.776 00:25:28.776 --- 10.0.0.1 ping statistics --- 00:25:28.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.776 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1149740 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1149740 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1149740 ']' 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.776 14:12:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.776 14:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:28.776 [2024-10-30 14:12:26.477111] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:25:28.776 [2024-10-30 14:12:26.477180] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.776 [2024-10-30 14:12:26.576890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:28.776 [2024-10-30 14:12:26.628625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.776 [2024-10-30 14:12:26.628682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.776 [2024-10-30 14:12:26.628691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.776 [2024-10-30 14:12:26.628698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.776 [2024-10-30 14:12:26.628705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.776 [2024-10-30 14:12:26.630369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.776 [2024-10-30 14:12:26.630372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.039 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.039 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:29.039 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:29.039 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:29.039 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:29.301 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.301 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1149740 00:25:29.301 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:29.301 [2024-10-30 14:12:27.513950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.301 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:29.562 Malloc0 00:25:29.562 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:29.824 14:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.085 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.085 [2024-10-30 14:12:28.316103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.085 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:30.346 [2024-10-30 14:12:28.500547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1150106 00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1150106 /var/tmp/bdevperf.sock 00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1150106 ']' 00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
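By the end of this stretch the target side is fully configured: nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace, a TCP transport and a 64 MB Malloc0 bdev exist, the bdev is exposed as a namespace of nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, and the subsystem listens on ports 4420 and 4421 of 10.0.0.2. A condensed sketch of that sequence, using the same binaries and RPCs as the trace (paths are this workspace's; the script's waitforlisten step is elided):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # target app on cores 0-1
  # ... wait for the target's RPC socket here (the script uses waitforlisten) ...
  $rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport with the options common.sh picks
  $rpc bdev_malloc_create 64 512 -b Malloc0                # 64 MB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2   # -a allow any host, -r ANA reporting on
  $rpc nvmf_subsystem_add_ns $nqn Malloc0                  # expose Malloc0 as a namespace of the subsystem
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # the second listener is the second path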
00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.346 14:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:31.286 14:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.286 14:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:31.286 14:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:31.286 14:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:31.854 Nvme0n1 00:25:31.854 14:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:32.111 Nvme0n1 00:25:32.112 14:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:32.112 14:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:34.656 14:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:34.656 14:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:34.656 14:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:34.656 14:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:35.597 14:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:35.597 14:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:35.597 14:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.597 14:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.858 14:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.858 14:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:35.858 14:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
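On the host side the script launches bdevperf with its own RPC socket and then attaches the same subsystem twice, once per listener port, with -x multipath, so the two TCP connections become two I/O paths of a single Nvme0n1 bdev; bdevperf.py perform_tests then starts the queue-depth-128, 4 KiB verify workload whose results appear at the end of the log. Condensed from the commands above (the -l -1 -o 10 values are the reconnect settings the test passes; consult bdev_nvme_attach_controller --help for their exact semantics):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bdevperf.sock
  $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 90 &   # -z: idle until perform_tests
  $spdk/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10   # both calls report the bdev Nvme0n1
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s $sock perform_tests &   # drive the verify workload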
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.858 14:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.858 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.858 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.858 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.858 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.119 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.119 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:36.119 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.119 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:36.381 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.381 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:36.381 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.381 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.381 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.381 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:36.381 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.381 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.642 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.642 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:36.642 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
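Each round of the test changes only the target's ANA labelling: the script's set_ANA_state helper issues one nvmf_subsystem_listener_set_ana_state per listener (the round in progress here sets 4420 to non_optimized and 4421 to optimized) and then sleeps a second before check_status runs, presumably to let the host observe the change. Condensed with the same RPC as the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  set_ANA_state() {   # usage: set_ANA_state <state for port 4420> <state for port 4421>
      $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized optimized   # the combination being applied in this round
  sleep 1                                 # the script waits before re-checking the host's view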
00:25:36.902 14:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:36.902 14:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:38.289 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:38.289 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.290 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.550 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.551 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.551 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.551 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.811 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.811 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.811 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
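Every check_status round on the host boils down to the same primitive: dump the initiator's I/O paths over the bdevperf RPC socket and pick one field (current, connected or accessible) of the path whose trsvcid matches the port under test. A simplified reading of that port_status helper, with the same RPC call and jq filter the trace shows:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  port_status() {   # usage: port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
      local port=$1 field=$2 expected=$3 value
      value=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
      [[ "$value" == "$expected" ]]   # the exit status is the check result
  }
  port_status 4420 current false && port_status 4421 current true   # the non_optimized/optimized expectation checked here

Roughly, connected tracks the TCP/NVMe connection itself, accessible tracks whether the listener's ANA state permits I/O, and current tracks whether the multipath policy would submit I/O on that path right now; that reading matches the later rounds where inaccessible listeners stay connected=true but drop to accessible=false.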
00:25:38.811 14:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.811 14:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.811 14:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:39.073 14:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.073 14:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:39.073 14:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.073 14:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:39.073 14:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:39.335 14:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:39.595 14:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:40.537 14:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:40.537 14:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.537 14:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.537 14:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.797 14:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.797 14:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:40.797 14:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.797 14:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.797 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.797 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.797 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.797 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:41.058 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.058 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:41.058 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.058 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:41.318 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.318 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:41.318 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.318 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:41.318 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.318 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:41.318 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.319 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.578 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.578 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:41.578 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:41.578 14:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:41.836 14:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:42.776 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:42.776 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:42.777 14:12:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.777 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:43.039 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.039 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:43.039 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.039 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.300 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.300 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.300 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.300 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.561 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.561 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.561 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.561 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.561 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.561 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.561 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.561 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.821 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.821 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:43.822 14:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.822 14:12:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.082 14:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.082 14:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:44.082 14:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:44.082 14:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:44.343 14:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:45.288 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:45.288 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:45.288 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.288 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:45.548 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.548 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:45.548 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.548 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.809 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.809 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.809 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.809 14:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.809 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.809 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.809 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:25:45.809 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.070 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.070 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:46.070 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.070 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:46.330 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.330 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:46.330 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.330 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.592 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.592 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:46.592 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:46.592 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:46.853 14:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:47.793 14:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:47.793 14:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:47.793 14:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.793 14:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.055 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.055 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:48.055 14:12:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.055 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.317 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.317 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.317 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.317 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.317 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.317 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.317 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.317 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.577 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.577 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:48.577 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.577 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.838 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.838 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:48.838 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.838 14:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.838 14:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.838 14:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:49.099 14:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:49.099 14:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:49.361 14:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:49.361 14:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:50.745 14:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:50.745 14:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:50.745 14:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.745 14:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:50.745 14:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.745 14:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:50.745 14:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.745 14:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:50.745 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.745 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.745 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.745 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.007 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.007 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:51.007 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.007 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:51.269 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.269 14:12:49 
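Until this point Nvme0n1 has been using the default active_passive policy, so at most one path reports current=true at a time. The bdev_nvme_set_multipath_policy call just above switches it to active_active, and the rounds that follow report current=true on both ports when both listeners are optimized or both non_optimized, and only on 4421 when it is optimized while 4420 is non_optimized. The switch itself, plus one way to eyeball the per-port current flags (the jq expression is illustrative; the fields are the same ones the test filters on):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r '.poll_groups[].io_paths[] | .transport.trsvcid + " current=" + (.current|tostring)'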
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:51.269 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.269 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:51.533 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.533 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:51.533 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.533 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.533 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.533 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:51.533 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:51.792 14:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:52.052 14:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:52.995 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:52.995 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:52.995 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.995 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.259 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.259 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:53.259 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.259 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.259 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.259 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.259 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.259 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.518 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.518 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.519 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.519 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.778 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.778 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:53.778 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.778 14:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.778 14:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.778 14:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:53.778 14:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.778 14:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.086 14:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.086 14:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:54.086 14:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:54.376 14:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:54.377 14:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
00:25:55.361 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:55.361 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:55.361 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.361 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:55.621 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.621 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:55.621 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.622 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:55.883 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.883 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:55.883 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.883 14:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:55.883 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.883 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.883 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.883 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:56.143 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.143 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:56.143 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.143 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.402 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.402 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:56.402 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.402 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.661 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.661 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:56.661 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:56.661 14:12:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:56.920 14:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:57.857 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:57.857 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:57.857 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.857 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.117 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.118 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:58.118 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.118 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.377 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.377 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.377 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.377 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.377 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:58.377 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.377 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.377 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.638 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.638 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.638 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.638 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.638 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.638 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:58.638 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.638 14:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1150106 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1150106 ']' 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1150106 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1150106 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1150106' 00:25:58.898 killing process with pid 1150106 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1150106 00:25:58.898 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1150106 00:25:58.898 { 00:25:58.898 "results": [ 00:25:58.898 { 00:25:58.898 "job": "Nvme0n1", 
00:25:58.898 "core_mask": "0x4", 00:25:58.898 "workload": "verify", 00:25:58.898 "status": "terminated", 00:25:58.898 "verify_range": { 00:25:58.898 "start": 0, 00:25:58.898 "length": 16384 00:25:58.898 }, 00:25:58.898 "queue_depth": 128, 00:25:58.898 "io_size": 4096, 00:25:58.898 "runtime": 26.629888, 00:25:58.898 "iops": 12015.371600511426, 00:25:58.898 "mibps": 46.935045314497756, 00:25:58.899 "io_failed": 0, 00:25:58.899 "io_timeout": 0, 00:25:58.899 "avg_latency_us": 10634.535666900023, 00:25:58.899 "min_latency_us": 583.68, 00:25:58.899 "max_latency_us": 3019898.88 00:25:58.899 } 00:25:58.899 ], 00:25:58.899 "core_count": 1 00:25:58.899 } 00:25:59.162 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1150106 00:25:59.162 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:59.162 [2024-10-30 14:12:28.588516] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:25:59.162 [2024-10-30 14:12:28.588591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150106 ] 00:25:59.162 [2024-10-30 14:12:28.682463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.162 [2024-10-30 14:12:28.734595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.162 Running I/O for 90 seconds... 00:25:59.162 11181.00 IOPS, 43.68 MiB/s [2024-10-30T13:12:57.461Z] 11283.50 IOPS, 44.08 MiB/s [2024-10-30T13:12:57.461Z] 11349.33 IOPS, 44.33 MiB/s [2024-10-30T13:12:57.461Z] 11750.75 IOPS, 45.90 MiB/s [2024-10-30T13:12:57.461Z] 11960.40 IOPS, 46.72 MiB/s [2024-10-30T13:12:57.461Z] 12108.33 IOPS, 47.30 MiB/s [2024-10-30T13:12:57.461Z] 12228.00 IOPS, 47.77 MiB/s [2024-10-30T13:12:57.461Z] 12314.00 IOPS, 48.10 MiB/s [2024-10-30T13:12:57.461Z] 12371.22 IOPS, 48.33 MiB/s [2024-10-30T13:12:57.461Z] 12418.80 IOPS, 48.51 MiB/s [2024-10-30T13:12:57.461Z] 12466.36 IOPS, 48.70 MiB/s [2024-10-30T13:12:57.461Z] [2024-10-30 14:12:42.327074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.162 [2024-10-30 14:12:42.327108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 
[2024-10-30 14:12:42.327354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:59.162 [2024-10-30 14:12:42.327443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.162 [2024-10-30 14:12:42.327449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.327459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.327465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.327476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.327481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.327493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.327499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2176 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.328988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.328994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.329007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.329013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.329027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.329033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.329046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.329052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:59.163 [2024-10-30 14:12:42.329065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.163 [2024-10-30 14:12:42.329071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:59.164 
[2024-10-30 14:12:42.329103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 
cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.329974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.329999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:59.164 [2024-10-30 14:12:42.330470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.164 [2024-10-30 14:12:42.330486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:42.330507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:42.330516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:42.330537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:42.330547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:42.330569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:42.330581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:42.330602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:42.330612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:42.330635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:42.330646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:42.330667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:42.330679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:42.330701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:42.330712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:42.330735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:42.330753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:59.165 12308.75 IOPS, 48.08 MiB/s [2024-10-30T13:12:57.464Z] 11361.92 IOPS, 44.38 MiB/s [2024-10-30T13:12:57.464Z] 10550.36 IOPS, 41.21 MiB/s [2024-10-30T13:12:57.464Z] 9996.67 IOPS, 39.05 
MiB/s [2024-10-30T13:12:57.464Z] 10175.25 IOPS, 39.75 MiB/s [2024-10-30T13:12:57.464Z] 10347.65 IOPS, 40.42 MiB/s [2024-10-30T13:12:57.464Z] 10726.72 IOPS, 41.90 MiB/s [2024-10-30T13:12:57.464Z] 11057.68 IOPS, 43.19 MiB/s [2024-10-30T13:12:57.464Z] 11247.95 IOPS, 43.94 MiB/s [2024-10-30T13:12:57.464Z] 11323.29 IOPS, 44.23 MiB/s [2024-10-30T13:12:57.464Z] 11392.64 IOPS, 44.50 MiB/s [2024-10-30T13:12:57.464Z] 11615.87 IOPS, 45.37 MiB/s [2024-10-30T13:12:57.464Z] 11834.50 IOPS, 46.23 MiB/s [2024-10-30T13:12:57.464Z] [2024-10-30 14:12:55.058389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.058426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.058464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.165 [2024-10-30 14:12:55.058485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.165 [2024-10-30 14:12:55.058502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.165 [2024-10-30 14:12:55.058517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.058534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.058549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.165 [2024-10-30 14:12:55.058566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115632 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.165 [2024-10-30 14:12:55.058582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.165 [2024-10-30 14:12:55.058597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.058608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.165 [2024-10-30 14:12:55.058614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.165 [2024-10-30 14:12:55.060504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:59.165 [2024-10-30 14:12:55.060515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.060520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:59.166 [2024-10-30 14:12:55.060531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.060536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:59.166 [2024-10-30 14:12:55.060547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.166 [2024-10-30 14:12:55.060553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:59.166 [2024-10-30 14:12:55.061420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.061432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:59.166 [2024-10-30 14:12:55.061445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.061454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.166 [2024-10-30 14:12:55.061465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.061470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:59.166 [2024-10-30 14:12:55.061481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.061486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.166 [2024-10-30 14:12:55.061496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.061501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:59.166 [2024-10-30 14:12:55.061512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.061518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:59.166 [2024-10-30 14:12:55.061528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.061534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:59.166 [2024-10-30 14:12:55.061544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.166 [2024-10-30 14:12:55.061550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:59.166 11967.80 IOPS, 46.75 MiB/s [2024-10-30T13:12:57.465Z] 12000.73 IOPS, 46.88 MiB/s [2024-10-30T13:12:57.465Z] Received shutdown signal, test time was about 26.630497 seconds 00:25:59.166 00:25:59.166 Latency(us) 00:25:59.166 [2024-10-30T13:12:57.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.166 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:59.166 Verification LBA range: start 0x0 length 0x4000 00:25:59.166 Nvme0n1 : 26.63 12015.37 46.94 0.00 0.00 10634.54 583.68 3019898.88 00:25:59.166 [2024-10-30T13:12:57.465Z] =================================================================================================================== 00:25:59.166 [2024-10-30T13:12:57.465Z] Total : 12015.37 46.94 0.00 0.00 10634.54 583.68 3019898.88 00:25:59.166 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.166 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:59.166 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:59.166 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:59.166 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:59.166 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:59.166 14:12:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:59.166 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:59.166 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:59.166 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:59.166 rmmod nvme_tcp 00:25:59.427 rmmod nvme_fabrics 00:25:59.427 rmmod nvme_keyring 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1149740 ']' 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1149740 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1149740 ']' 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1149740 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1149740 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1149740' 00:25:59.427 killing process with pid 1149740 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1149740 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1149740 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.427 14:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:01.972 00:26:01.972 real 0m41.121s 00:26:01.972 user 1m45.904s 00:26:01.972 sys 0m11.674s 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:01.972 ************************************ 00:26:01.972 END TEST nvmf_host_multipath_status 00:26:01.972 ************************************ 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.972 ************************************ 00:26:01.972 START TEST nvmf_discovery_remove_ifc 00:26:01.972 ************************************ 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:01.972 * Looking for test storage... 00:26:01.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:01.972 14:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:01.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.972 --rc genhtml_branch_coverage=1 00:26:01.972 --rc genhtml_function_coverage=1 00:26:01.972 --rc genhtml_legend=1 00:26:01.972 --rc geninfo_all_blocks=1 00:26:01.972 --rc geninfo_unexecuted_blocks=1 00:26:01.972 00:26:01.972 ' 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:01.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.972 --rc genhtml_branch_coverage=1 00:26:01.972 --rc genhtml_function_coverage=1 00:26:01.972 --rc genhtml_legend=1 00:26:01.972 --rc geninfo_all_blocks=1 00:26:01.972 --rc geninfo_unexecuted_blocks=1 00:26:01.972 00:26:01.972 ' 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:01.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.972 --rc genhtml_branch_coverage=1 00:26:01.972 --rc genhtml_function_coverage=1 00:26:01.972 --rc genhtml_legend=1 00:26:01.972 --rc geninfo_all_blocks=1 00:26:01.972 --rc geninfo_unexecuted_blocks=1 00:26:01.972 00:26:01.972 ' 00:26:01.972 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:01.972 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:26:01.972 --rc genhtml_branch_coverage=1 00:26:01.972 --rc genhtml_function_coverage=1 00:26:01.972 --rc genhtml_legend=1 00:26:01.972 --rc geninfo_all_blocks=1 00:26:01.973 --rc geninfo_unexecuted_blocks=1 00:26:01.973 00:26:01.973 ' 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:26:01.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.973 14:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:10.115 14:13:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:10.115 Found 
0000:4b:00.0 (0x8086 - 0x159b) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:10.115 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:10.115 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:10.115 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.115 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:10.116 14:13:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:10.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:26:10.116 00:26:10.116 --- 10.0.0.2 ping statistics --- 00:26:10.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.116 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:10.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:26:10.116 00:26:10.116 --- 10.0.0.1 ping statistics --- 00:26:10.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.116 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1160065 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1160065 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1160065 ']' 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.116 14:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.116 [2024-10-30 14:13:07.649406] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:26:10.116 [2024-10-30 14:13:07.649473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.116 [2024-10-30 14:13:07.752570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.116 [2024-10-30 14:13:07.802934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.116 [2024-10-30 14:13:07.802992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.116 [2024-10-30 14:13:07.803001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.116 [2024-10-30 14:13:07.803009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.116 [2024-10-30 14:13:07.803016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
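For reference while reading the nvmf_tcp_init trace above: the harness moves the target-side port of the e810 pair into its own network namespace so that the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk over the real links rather than loopback. A minimal manual equivalent, assuming the same cvl_0_0/cvl_0_1 names this run discovered, would be roughly:

    # put the target-side interface into a dedicated namespace; the initiator stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port (the harness also tags the rule with an SPDK_NVMF comment for later cleanup)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is the nvmfpid=1160065 process whose startup notices appear above.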
00:26:10.116 [2024-10-30 14:13:07.803829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.378 [2024-10-30 14:13:08.533996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.378 [2024-10-30 14:13:08.542290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:10.378 null0 00:26:10.378 [2024-10-30 14:13:08.574204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1160347 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1160347 /tmp/host.sock 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1160347 ']' 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:10.378 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.378 14:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.378 [2024-10-30 14:13:08.661671] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:26:10.378 [2024-10-30 14:13:08.661735] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160347 ] 00:26:10.640 [2024-10-30 14:13:08.757904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.640 [2024-10-30 14:13:08.810275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.214 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.477 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.477 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:11.477 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.477 14:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.436 [2024-10-30 14:13:10.594297] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:12.436 [2024-10-30 14:13:10.594338] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:12.436 [2024-10-30 14:13:10.594354] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:12.436 [2024-10-30 14:13:10.724770] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:12.698 [2024-10-30 14:13:10.905276] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:12.698 [2024-10-30 14:13:10.906504] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x161c590:1 started. 
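On the host side, a second nvmf_tgt is started with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme, and everything after that is driven over its RPC socket. Based on the rpc_cmd calls traced above, the equivalent sequence with the SPDK rpc client would be approximately as follows (invoking scripts/rpc.py from an SPDK checkout is an assumption; the method names and flags are the ones visible in the trace):

    RPC="scripts/rpc.py -s /tmp/host.sock"
    $RPC bdev_nvme_set_options -e 1          # same option the harness sets before init
    $RPC framework_start_init                # finish startup, since the app was launched with --wait-for-rpc
    # attach to the discovery service on 10.0.0.2:8009 and wait for the namespace to appear as a bdev;
    # the short ctrlr-loss / reconnect / fast-io-fail timeouts are what let the later interface
    # removal tear the controller down within a couple of seconds
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach
    $RPC bdev_get_bdevs | jq -r '.[].name'   # expect nvme0n1 once the discovery attach completes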
00:26:12.698 [2024-10-30 14:13:10.908326] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:12.698 [2024-10-30 14:13:10.908400] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:12.698 [2024-10-30 14:13:10.908426] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:12.698 [2024-10-30 14:13:10.908445] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:12.698 [2024-10-30 14:13:10.908472] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:12.698 [2024-10-30 14:13:10.912802] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x161c590 was disconnected and freed. delete nvme_qpair. 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:12.698 14:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.960 14:13:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:12.960 14:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:13.902 14:13:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.286 14:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.227 14:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.227 14:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.227 14:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.227 14:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.227 14:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.227 14:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.227 14:13:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.227 14:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.227 14:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:16.227 14:13:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.167 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.167 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.167 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.167 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.167 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.167 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.167 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.168 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.168 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:17.168 14:13:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.108 [2024-10-30 14:13:16.348485] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:18.108 [2024-10-30 14:13:16.348519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.108 [2024-10-30 14:13:16.348528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.108 [2024-10-30 14:13:16.348536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.108 [2024-10-30 14:13:16.348541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.108 [2024-10-30 14:13:16.348547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.108 [2024-10-30 14:13:16.348553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.108 [2024-10-30 14:13:16.348558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.108 [2024-10-30 14:13:16.348563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.108 [2024-10-30 14:13:16.348569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.108 [2024-10-30 14:13:16.348575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.108 [2024-10-30 14:13:16.348580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8e00 is same with the state(6) to be set 00:26:18.108 14:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.108 [2024-10-30 14:13:16.358506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8e00 (9): Bad file descriptor 00:26:18.109 14:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.109 14:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.109 14:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.109 14:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.109 14:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.109 14:13:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.109 [2024-10-30 14:13:16.368540] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:18.109 [2024-10-30 14:13:16.368551] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:18.109 [2024-10-30 14:13:16.368555] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:18.109 [2024-10-30 14:13:16.368559] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:18.109 [2024-10-30 14:13:16.368574] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:19.492 [2024-10-30 14:13:17.428829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:19.492 [2024-10-30 14:13:17.428922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8e00 with addr=10.0.0.2, port=4420 00:26:19.492 [2024-10-30 14:13:17.428954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8e00 is same with the state(6) to be set 00:26:19.492 [2024-10-30 14:13:17.429012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8e00 (9): Bad file descriptor 00:26:19.492 [2024-10-30 14:13:17.429159] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:19.492 [2024-10-30 14:13:17.429219] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:19.492 [2024-10-30 14:13:17.429242] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:19.492 [2024-10-30 14:13:17.429267] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:19.492 [2024-10-30 14:13:17.429289] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:19.492 [2024-10-30 14:13:17.429306] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:19.492 [2024-10-30 14:13:17.429341] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:19.492 [2024-10-30 14:13:17.429365] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:19.492 [2024-10-30 14:13:17.429380] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:19.492 14:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.492 14:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:19.492 14:13:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:20.434 [2024-10-30 14:13:18.431781] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.434 [2024-10-30 14:13:18.431795] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.434 [2024-10-30 14:13:18.431804] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.434 [2024-10-30 14:13:18.431809] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.434 [2024-10-30 14:13:18.431815] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:20.434 [2024-10-30 14:13:18.431820] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:20.434 [2024-10-30 14:13:18.431824] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.434 [2024-10-30 14:13:18.431831] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:20.434 [2024-10-30 14:13:18.431848] bdev_nvme.c:7042:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:20.434 [2024-10-30 14:13:18.431864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.434 [2024-10-30 14:13:18.431872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.434 [2024-10-30 14:13:18.431879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.434 [2024-10-30 14:13:18.431885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.434 [2024-10-30 14:13:18.431894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.435 [2024-10-30 14:13:18.431900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.435 [2024-10-30 14:13:18.431906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.435 [2024-10-30 14:13:18.431911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.435 [2024-10-30 14:13:18.431917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.435 [2024-10-30 14:13:18.431922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.435 [2024-10-30 14:13:18.431927] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:26:20.435 [2024-10-30 14:13:18.432072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8540 (9): Bad file descriptor 00:26:20.435 [2024-10-30 14:13:18.433082] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:20.435 [2024-10-30 14:13:18.433091] nvme_ctrlr.c:1190:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:20.435 14:13:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.375 14:13:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.375 14:13:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.375 14:13:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.375 14:13:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.375 14:13:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.375 14:13:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.375 14:13:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.375 14:13:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.635 14:13:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:21.635 14:13:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.204 [2024-10-30 14:13:20.487760] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:22.204 [2024-10-30 14:13:20.487780] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:22.204 [2024-10-30 14:13:20.487791] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:22.465 [2024-10-30 14:13:20.616153] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.465 [2024-10-30 14:13:20.717913] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:22.465 [2024-10-30 14:13:20.718604] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x15f44c0:1 started. 00:26:22.465 [2024-10-30 14:13:20.719509] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:22.465 [2024-10-30 14:13:20.719539] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:22.465 [2024-10-30 14:13:20.719555] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:22.465 [2024-10-30 14:13:20.719568] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:22.465 [2024-10-30 14:13:20.719574] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:22.465 [2024-10-30 14:13:20.725953] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x15f44c0 was disconnected and freed. delete nvme_qpair. 
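The repeated rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs blocks above are the harness polling the bdev list once per second: it first waits for nvme0n1 to vanish after the target-side address is deleted and the link taken down, then restores the address and link and waits for the rediscovered namespace to come back as nvme1n1. A sketch of that polling idiom, reconstructed from the trace (helper names follow the script but are approximations, using the same assumed rpc.py invocation as above):

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {                  # loop until the bdev list matches the expected value
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    # flap the target-side interface inside the namespace and watch the bdev disappear and return
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''                   # the 2 s ctrlr-loss timeout removes nvme0n1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1              # discovery reattaches and exposes the namespace again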
00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1160347 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1160347 ']' 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1160347 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.465 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1160347 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1160347' 00:26:22.727 killing process with pid 1160347 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1160347 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1160347 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:22.727 rmmod nvme_tcp 00:26:22.727 rmmod nvme_fabrics 00:26:22.727 rmmod nvme_keyring 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1160065 ']' 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1160065 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1160065 ']' 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1160065 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@959 -- # uname 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.727 14:13:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1160065 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1160065' 00:26:22.987 killing process with pid 1160065 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1160065 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1160065 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.987 14:13:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.531 00:26:25.531 real 0m23.391s 00:26:25.531 user 0m27.400s 00:26:25.531 sys 0m7.062s 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.531 ************************************ 00:26:25.531 END TEST nvmf_discovery_remove_ifc 00:26:25.531 ************************************ 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.531 ************************************ 00:26:25.531 
START TEST nvmf_identify_kernel_target 00:26:25.531 ************************************ 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:25.531 * Looking for test storage... 00:26:25.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:25.531 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:25.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.532 --rc genhtml_branch_coverage=1 00:26:25.532 --rc genhtml_function_coverage=1 00:26:25.532 --rc genhtml_legend=1 00:26:25.532 --rc geninfo_all_blocks=1 00:26:25.532 --rc geninfo_unexecuted_blocks=1 00:26:25.532 00:26:25.532 ' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:25.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.532 --rc genhtml_branch_coverage=1 00:26:25.532 --rc genhtml_function_coverage=1 00:26:25.532 --rc genhtml_legend=1 00:26:25.532 --rc geninfo_all_blocks=1 00:26:25.532 --rc geninfo_unexecuted_blocks=1 00:26:25.532 00:26:25.532 ' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:25.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.532 --rc genhtml_branch_coverage=1 00:26:25.532 --rc genhtml_function_coverage=1 00:26:25.532 --rc genhtml_legend=1 00:26:25.532 --rc geninfo_all_blocks=1 00:26:25.532 --rc geninfo_unexecuted_blocks=1 00:26:25.532 00:26:25.532 ' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:25.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.532 --rc genhtml_branch_coverage=1 00:26:25.532 --rc genhtml_function_coverage=1 00:26:25.532 --rc genhtml_legend=1 00:26:25.532 --rc geninfo_all_blocks=1 00:26:25.532 --rc geninfo_unexecuted_blocks=1 00:26:25.532 00:26:25.532 ' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:25.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.532 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:33.673 14:13:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:33.673 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:33.673 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:33.673 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:33.674 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:33.674 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:33.674 14:13:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:33.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:26:33.674 00:26:33.674 --- 10.0.0.2 ping statistics --- 00:26:33.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.674 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:26:33.674 00:26:33.674 --- 10.0.0.1 ping statistics --- 00:26:33.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.674 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.674 14:13:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:33.674 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:33.675 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:33.675 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:33.675 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:33.675 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:33.675 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:36.977 Waiting for block devices as requested 00:26:36.977 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:36.977 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:36.977 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:36.977 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:36.977 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:36.977 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:36.977 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:36.977 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:37.238 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:37.238 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:37.499 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:37.499 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:37.499 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:37.760 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:37.760 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:37.760 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:38.021 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
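Stepping back, the nvmftestinit plumbing traced above amounts to: move the first e810 port (cvl_0_0) into a dedicated network namespace for the target side, keep the second port (cvl_0_1) in the root namespace for the initiator, address them as 10.0.0.2 and 10.0.0.1, and open TCP port 4420 in the firewall. A hand-run equivalent, using the interface names and addressing from this log (a sketch of the setup, not the script itself):

    # Target side: its own netns with 10.0.0.2 on the first port.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Initiator side: 10.0.0.1 on the second port in the root namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # Let NVMe/TCP traffic in on the initiator-facing interface (port 4420).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity checks, as in the trace: each side should reach the other.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1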
00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:38.282 No valid GPT data, bailing 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:38.282 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:38.283 00:26:38.283 Discovery Log Number of Records 2, Generation counter 2 00:26:38.283 =====Discovery Log Entry 0====== 00:26:38.283 trtype: tcp 00:26:38.283 adrfam: ipv4 00:26:38.283 subtype: current discovery subsystem 00:26:38.283 treq: not specified, sq flow control disable supported 00:26:38.283 portid: 1 00:26:38.283 trsvcid: 4420 00:26:38.283 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:38.283 traddr: 10.0.0.1 00:26:38.283 eflags: none 00:26:38.283 sectype: none 00:26:38.283 =====Discovery Log Entry 1====== 00:26:38.283 trtype: tcp 00:26:38.283 adrfam: ipv4 00:26:38.283 subtype: nvme subsystem 00:26:38.283 treq: not specified, sq flow control disable 
supported 00:26:38.283 portid: 1 00:26:38.283 trsvcid: 4420 00:26:38.283 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:38.283 traddr: 10.0.0.1 00:26:38.283 eflags: none 00:26:38.283 sectype: none 00:26:38.283 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:38.283 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:38.545 ===================================================== 00:26:38.545 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:38.545 ===================================================== 00:26:38.545 Controller Capabilities/Features 00:26:38.545 ================================ 00:26:38.545 Vendor ID: 0000 00:26:38.545 Subsystem Vendor ID: 0000 00:26:38.545 Serial Number: 4af1232475db708af5db 00:26:38.545 Model Number: Linux 00:26:38.545 Firmware Version: 6.8.9-20 00:26:38.545 Recommended Arb Burst: 0 00:26:38.545 IEEE OUI Identifier: 00 00 00 00:26:38.545 Multi-path I/O 00:26:38.545 May have multiple subsystem ports: No 00:26:38.545 May have multiple controllers: No 00:26:38.545 Associated with SR-IOV VF: No 00:26:38.545 Max Data Transfer Size: Unlimited 00:26:38.545 Max Number of Namespaces: 0 00:26:38.545 Max Number of I/O Queues: 1024 00:26:38.545 NVMe Specification Version (VS): 1.3 00:26:38.545 NVMe Specification Version (Identify): 1.3 00:26:38.545 Maximum Queue Entries: 1024 00:26:38.545 Contiguous Queues Required: No 00:26:38.545 Arbitration Mechanisms Supported 00:26:38.545 Weighted Round Robin: Not Supported 00:26:38.545 Vendor Specific: Not Supported 00:26:38.545 Reset Timeout: 7500 ms 00:26:38.545 Doorbell Stride: 4 bytes 00:26:38.545 NVM Subsystem Reset: Not Supported 00:26:38.545 Command Sets Supported 00:26:38.545 NVM Command Set: Supported 00:26:38.545 Boot Partition: Not Supported 00:26:38.545 Memory Page Size Minimum: 4096 bytes 00:26:38.545 Memory Page Size Maximum: 4096 bytes 00:26:38.545 Persistent Memory Region: Not Supported 00:26:38.545 Optional Asynchronous Events Supported 00:26:38.545 Namespace Attribute Notices: Not Supported 00:26:38.545 Firmware Activation Notices: Not Supported 00:26:38.545 ANA Change Notices: Not Supported 00:26:38.545 PLE Aggregate Log Change Notices: Not Supported 00:26:38.545 LBA Status Info Alert Notices: Not Supported 00:26:38.545 EGE Aggregate Log Change Notices: Not Supported 00:26:38.545 Normal NVM Subsystem Shutdown event: Not Supported 00:26:38.545 Zone Descriptor Change Notices: Not Supported 00:26:38.545 Discovery Log Change Notices: Supported 00:26:38.545 Controller Attributes 00:26:38.545 128-bit Host Identifier: Not Supported 00:26:38.545 Non-Operational Permissive Mode: Not Supported 00:26:38.545 NVM Sets: Not Supported 00:26:38.545 Read Recovery Levels: Not Supported 00:26:38.545 Endurance Groups: Not Supported 00:26:38.545 Predictable Latency Mode: Not Supported 00:26:38.545 Traffic Based Keep ALive: Not Supported 00:26:38.545 Namespace Granularity: Not Supported 00:26:38.545 SQ Associations: Not Supported 00:26:38.545 UUID List: Not Supported 00:26:38.545 Multi-Domain Subsystem: Not Supported 00:26:38.545 Fixed Capacity Management: Not Supported 00:26:38.545 Variable Capacity Management: Not Supported 00:26:38.545 Delete Endurance Group: Not Supported 00:26:38.545 Delete NVM Set: Not Supported 00:26:38.545 Extended LBA Formats Supported: Not Supported 00:26:38.545 Flexible Data Placement 
Supported: Not Supported 00:26:38.545 00:26:38.545 Controller Memory Buffer Support 00:26:38.545 ================================ 00:26:38.545 Supported: No 00:26:38.545 00:26:38.545 Persistent Memory Region Support 00:26:38.545 ================================ 00:26:38.545 Supported: No 00:26:38.545 00:26:38.545 Admin Command Set Attributes 00:26:38.545 ============================ 00:26:38.545 Security Send/Receive: Not Supported 00:26:38.545 Format NVM: Not Supported 00:26:38.545 Firmware Activate/Download: Not Supported 00:26:38.545 Namespace Management: Not Supported 00:26:38.546 Device Self-Test: Not Supported 00:26:38.546 Directives: Not Supported 00:26:38.546 NVMe-MI: Not Supported 00:26:38.546 Virtualization Management: Not Supported 00:26:38.546 Doorbell Buffer Config: Not Supported 00:26:38.546 Get LBA Status Capability: Not Supported 00:26:38.546 Command & Feature Lockdown Capability: Not Supported 00:26:38.546 Abort Command Limit: 1 00:26:38.546 Async Event Request Limit: 1 00:26:38.546 Number of Firmware Slots: N/A 00:26:38.546 Firmware Slot 1 Read-Only: N/A 00:26:38.546 Firmware Activation Without Reset: N/A 00:26:38.546 Multiple Update Detection Support: N/A 00:26:38.546 Firmware Update Granularity: No Information Provided 00:26:38.546 Per-Namespace SMART Log: No 00:26:38.546 Asymmetric Namespace Access Log Page: Not Supported 00:26:38.546 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:38.546 Command Effects Log Page: Not Supported 00:26:38.546 Get Log Page Extended Data: Supported 00:26:38.546 Telemetry Log Pages: Not Supported 00:26:38.546 Persistent Event Log Pages: Not Supported 00:26:38.546 Supported Log Pages Log Page: May Support 00:26:38.546 Commands Supported & Effects Log Page: Not Supported 00:26:38.546 Feature Identifiers & Effects Log Page:May Support 00:26:38.546 NVMe-MI Commands & Effects Log Page: May Support 00:26:38.546 Data Area 4 for Telemetry Log: Not Supported 00:26:38.546 Error Log Page Entries Supported: 1 00:26:38.546 Keep Alive: Not Supported 00:26:38.546 00:26:38.546 NVM Command Set Attributes 00:26:38.546 ========================== 00:26:38.546 Submission Queue Entry Size 00:26:38.546 Max: 1 00:26:38.546 Min: 1 00:26:38.546 Completion Queue Entry Size 00:26:38.546 Max: 1 00:26:38.546 Min: 1 00:26:38.546 Number of Namespaces: 0 00:26:38.546 Compare Command: Not Supported 00:26:38.546 Write Uncorrectable Command: Not Supported 00:26:38.546 Dataset Management Command: Not Supported 00:26:38.546 Write Zeroes Command: Not Supported 00:26:38.546 Set Features Save Field: Not Supported 00:26:38.546 Reservations: Not Supported 00:26:38.546 Timestamp: Not Supported 00:26:38.546 Copy: Not Supported 00:26:38.546 Volatile Write Cache: Not Present 00:26:38.546 Atomic Write Unit (Normal): 1 00:26:38.546 Atomic Write Unit (PFail): 1 00:26:38.546 Atomic Compare & Write Unit: 1 00:26:38.546 Fused Compare & Write: Not Supported 00:26:38.546 Scatter-Gather List 00:26:38.546 SGL Command Set: Supported 00:26:38.546 SGL Keyed: Not Supported 00:26:38.546 SGL Bit Bucket Descriptor: Not Supported 00:26:38.546 SGL Metadata Pointer: Not Supported 00:26:38.546 Oversized SGL: Not Supported 00:26:38.546 SGL Metadata Address: Not Supported 00:26:38.546 SGL Offset: Supported 00:26:38.546 Transport SGL Data Block: Not Supported 00:26:38.546 Replay Protected Memory Block: Not Supported 00:26:38.546 00:26:38.546 Firmware Slot Information 00:26:38.546 ========================= 00:26:38.546 Active slot: 0 00:26:38.546 00:26:38.546 00:26:38.546 Error Log 00:26:38.546 
========= 00:26:38.546 00:26:38.546 Active Namespaces 00:26:38.546 ================= 00:26:38.546 Discovery Log Page 00:26:38.546 ================== 00:26:38.546 Generation Counter: 2 00:26:38.546 Number of Records: 2 00:26:38.546 Record Format: 0 00:26:38.546 00:26:38.546 Discovery Log Entry 0 00:26:38.546 ---------------------- 00:26:38.546 Transport Type: 3 (TCP) 00:26:38.546 Address Family: 1 (IPv4) 00:26:38.546 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:38.546 Entry Flags: 00:26:38.546 Duplicate Returned Information: 0 00:26:38.546 Explicit Persistent Connection Support for Discovery: 0 00:26:38.546 Transport Requirements: 00:26:38.546 Secure Channel: Not Specified 00:26:38.546 Port ID: 1 (0x0001) 00:26:38.546 Controller ID: 65535 (0xffff) 00:26:38.546 Admin Max SQ Size: 32 00:26:38.546 Transport Service Identifier: 4420 00:26:38.546 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:38.546 Transport Address: 10.0.0.1 00:26:38.546 Discovery Log Entry 1 00:26:38.546 ---------------------- 00:26:38.546 Transport Type: 3 (TCP) 00:26:38.546 Address Family: 1 (IPv4) 00:26:38.546 Subsystem Type: 2 (NVM Subsystem) 00:26:38.546 Entry Flags: 00:26:38.546 Duplicate Returned Information: 0 00:26:38.546 Explicit Persistent Connection Support for Discovery: 0 00:26:38.546 Transport Requirements: 00:26:38.546 Secure Channel: Not Specified 00:26:38.546 Port ID: 1 (0x0001) 00:26:38.546 Controller ID: 65535 (0xffff) 00:26:38.546 Admin Max SQ Size: 32 00:26:38.546 Transport Service Identifier: 4420 00:26:38.546 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:38.546 Transport Address: 10.0.0.1 00:26:38.546 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:38.546 get_feature(0x01) failed 00:26:38.546 get_feature(0x02) failed 00:26:38.546 get_feature(0x04) failed 00:26:38.546 ===================================================== 00:26:38.546 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:38.546 ===================================================== 00:26:38.546 Controller Capabilities/Features 00:26:38.546 ================================ 00:26:38.546 Vendor ID: 0000 00:26:38.546 Subsystem Vendor ID: 0000 00:26:38.546 Serial Number: 4bb80e19ae4521007c49 00:26:38.546 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:38.546 Firmware Version: 6.8.9-20 00:26:38.546 Recommended Arb Burst: 6 00:26:38.546 IEEE OUI Identifier: 00 00 00 00:26:38.546 Multi-path I/O 00:26:38.546 May have multiple subsystem ports: Yes 00:26:38.546 May have multiple controllers: Yes 00:26:38.546 Associated with SR-IOV VF: No 00:26:38.546 Max Data Transfer Size: Unlimited 00:26:38.546 Max Number of Namespaces: 1024 00:26:38.546 Max Number of I/O Queues: 128 00:26:38.546 NVMe Specification Version (VS): 1.3 00:26:38.546 NVMe Specification Version (Identify): 1.3 00:26:38.546 Maximum Queue Entries: 1024 00:26:38.546 Contiguous Queues Required: No 00:26:38.546 Arbitration Mechanisms Supported 00:26:38.546 Weighted Round Robin: Not Supported 00:26:38.546 Vendor Specific: Not Supported 00:26:38.546 Reset Timeout: 7500 ms 00:26:38.546 Doorbell Stride: 4 bytes 00:26:38.546 NVM Subsystem Reset: Not Supported 00:26:38.546 Command Sets Supported 00:26:38.546 NVM Command Set: Supported 00:26:38.546 Boot Partition: Not Supported 00:26:38.546 
Memory Page Size Minimum: 4096 bytes 00:26:38.546 Memory Page Size Maximum: 4096 bytes 00:26:38.546 Persistent Memory Region: Not Supported 00:26:38.546 Optional Asynchronous Events Supported 00:26:38.546 Namespace Attribute Notices: Supported 00:26:38.546 Firmware Activation Notices: Not Supported 00:26:38.546 ANA Change Notices: Supported 00:26:38.546 PLE Aggregate Log Change Notices: Not Supported 00:26:38.546 LBA Status Info Alert Notices: Not Supported 00:26:38.546 EGE Aggregate Log Change Notices: Not Supported 00:26:38.546 Normal NVM Subsystem Shutdown event: Not Supported 00:26:38.546 Zone Descriptor Change Notices: Not Supported 00:26:38.546 Discovery Log Change Notices: Not Supported 00:26:38.546 Controller Attributes 00:26:38.546 128-bit Host Identifier: Supported 00:26:38.546 Non-Operational Permissive Mode: Not Supported 00:26:38.546 NVM Sets: Not Supported 00:26:38.546 Read Recovery Levels: Not Supported 00:26:38.546 Endurance Groups: Not Supported 00:26:38.546 Predictable Latency Mode: Not Supported 00:26:38.546 Traffic Based Keep ALive: Supported 00:26:38.546 Namespace Granularity: Not Supported 00:26:38.546 SQ Associations: Not Supported 00:26:38.546 UUID List: Not Supported 00:26:38.546 Multi-Domain Subsystem: Not Supported 00:26:38.546 Fixed Capacity Management: Not Supported 00:26:38.546 Variable Capacity Management: Not Supported 00:26:38.546 Delete Endurance Group: Not Supported 00:26:38.546 Delete NVM Set: Not Supported 00:26:38.546 Extended LBA Formats Supported: Not Supported 00:26:38.546 Flexible Data Placement Supported: Not Supported 00:26:38.546 00:26:38.546 Controller Memory Buffer Support 00:26:38.546 ================================ 00:26:38.546 Supported: No 00:26:38.546 00:26:38.546 Persistent Memory Region Support 00:26:38.546 ================================ 00:26:38.546 Supported: No 00:26:38.546 00:26:38.546 Admin Command Set Attributes 00:26:38.546 ============================ 00:26:38.546 Security Send/Receive: Not Supported 00:26:38.546 Format NVM: Not Supported 00:26:38.546 Firmware Activate/Download: Not Supported 00:26:38.546 Namespace Management: Not Supported 00:26:38.546 Device Self-Test: Not Supported 00:26:38.546 Directives: Not Supported 00:26:38.546 NVMe-MI: Not Supported 00:26:38.546 Virtualization Management: Not Supported 00:26:38.546 Doorbell Buffer Config: Not Supported 00:26:38.546 Get LBA Status Capability: Not Supported 00:26:38.546 Command & Feature Lockdown Capability: Not Supported 00:26:38.546 Abort Command Limit: 4 00:26:38.546 Async Event Request Limit: 4 00:26:38.546 Number of Firmware Slots: N/A 00:26:38.546 Firmware Slot 1 Read-Only: N/A 00:26:38.546 Firmware Activation Without Reset: N/A 00:26:38.546 Multiple Update Detection Support: N/A 00:26:38.547 Firmware Update Granularity: No Information Provided 00:26:38.547 Per-Namespace SMART Log: Yes 00:26:38.547 Asymmetric Namespace Access Log Page: Supported 00:26:38.547 ANA Transition Time : 10 sec 00:26:38.547 00:26:38.547 Asymmetric Namespace Access Capabilities 00:26:38.547 ANA Optimized State : Supported 00:26:38.547 ANA Non-Optimized State : Supported 00:26:38.547 ANA Inaccessible State : Supported 00:26:38.547 ANA Persistent Loss State : Supported 00:26:38.547 ANA Change State : Supported 00:26:38.547 ANAGRPID is not changed : No 00:26:38.547 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:38.547 00:26:38.547 ANA Group Identifier Maximum : 128 00:26:38.547 Number of ANA Group Identifiers : 128 00:26:38.547 Max Number of Allowed Namespaces : 1024 00:26:38.547 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:38.547 Command Effects Log Page: Supported 00:26:38.547 Get Log Page Extended Data: Supported 00:26:38.547 Telemetry Log Pages: Not Supported 00:26:38.547 Persistent Event Log Pages: Not Supported 00:26:38.547 Supported Log Pages Log Page: May Support 00:26:38.547 Commands Supported & Effects Log Page: Not Supported 00:26:38.547 Feature Identifiers & Effects Log Page:May Support 00:26:38.547 NVMe-MI Commands & Effects Log Page: May Support 00:26:38.547 Data Area 4 for Telemetry Log: Not Supported 00:26:38.547 Error Log Page Entries Supported: 128 00:26:38.547 Keep Alive: Supported 00:26:38.547 Keep Alive Granularity: 1000 ms 00:26:38.547 00:26:38.547 NVM Command Set Attributes 00:26:38.547 ========================== 00:26:38.547 Submission Queue Entry Size 00:26:38.547 Max: 64 00:26:38.547 Min: 64 00:26:38.547 Completion Queue Entry Size 00:26:38.547 Max: 16 00:26:38.547 Min: 16 00:26:38.547 Number of Namespaces: 1024 00:26:38.547 Compare Command: Not Supported 00:26:38.547 Write Uncorrectable Command: Not Supported 00:26:38.547 Dataset Management Command: Supported 00:26:38.547 Write Zeroes Command: Supported 00:26:38.547 Set Features Save Field: Not Supported 00:26:38.547 Reservations: Not Supported 00:26:38.547 Timestamp: Not Supported 00:26:38.547 Copy: Not Supported 00:26:38.547 Volatile Write Cache: Present 00:26:38.547 Atomic Write Unit (Normal): 1 00:26:38.547 Atomic Write Unit (PFail): 1 00:26:38.547 Atomic Compare & Write Unit: 1 00:26:38.547 Fused Compare & Write: Not Supported 00:26:38.547 Scatter-Gather List 00:26:38.547 SGL Command Set: Supported 00:26:38.547 SGL Keyed: Not Supported 00:26:38.547 SGL Bit Bucket Descriptor: Not Supported 00:26:38.547 SGL Metadata Pointer: Not Supported 00:26:38.547 Oversized SGL: Not Supported 00:26:38.547 SGL Metadata Address: Not Supported 00:26:38.547 SGL Offset: Supported 00:26:38.547 Transport SGL Data Block: Not Supported 00:26:38.547 Replay Protected Memory Block: Not Supported 00:26:38.547 00:26:38.547 Firmware Slot Information 00:26:38.547 ========================= 00:26:38.547 Active slot: 0 00:26:38.547 00:26:38.547 Asymmetric Namespace Access 00:26:38.547 =========================== 00:26:38.547 Change Count : 0 00:26:38.547 Number of ANA Group Descriptors : 1 00:26:38.547 ANA Group Descriptor : 0 00:26:38.547 ANA Group ID : 1 00:26:38.547 Number of NSID Values : 1 00:26:38.547 Change Count : 0 00:26:38.547 ANA State : 1 00:26:38.547 Namespace Identifier : 1 00:26:38.547 00:26:38.547 Commands Supported and Effects 00:26:38.547 ============================== 00:26:38.547 Admin Commands 00:26:38.547 -------------- 00:26:38.547 Get Log Page (02h): Supported 00:26:38.547 Identify (06h): Supported 00:26:38.547 Abort (08h): Supported 00:26:38.547 Set Features (09h): Supported 00:26:38.547 Get Features (0Ah): Supported 00:26:38.547 Asynchronous Event Request (0Ch): Supported 00:26:38.547 Keep Alive (18h): Supported 00:26:38.547 I/O Commands 00:26:38.547 ------------ 00:26:38.547 Flush (00h): Supported 00:26:38.547 Write (01h): Supported LBA-Change 00:26:38.547 Read (02h): Supported 00:26:38.547 Write Zeroes (08h): Supported LBA-Change 00:26:38.547 Dataset Management (09h): Supported 00:26:38.547 00:26:38.547 Error Log 00:26:38.547 ========= 00:26:38.547 Entry: 0 00:26:38.547 Error Count: 0x3 00:26:38.547 Submission Queue Id: 0x0 00:26:38.547 Command Id: 0x5 00:26:38.547 Phase Bit: 0 00:26:38.547 Status Code: 0x2 00:26:38.547 Status Code Type: 0x0 00:26:38.547 Do Not Retry: 1 00:26:38.547 
Error Location: 0x28 00:26:38.547 LBA: 0x0 00:26:38.547 Namespace: 0x0 00:26:38.547 Vendor Log Page: 0x0 00:26:38.547 ----------- 00:26:38.547 Entry: 1 00:26:38.547 Error Count: 0x2 00:26:38.547 Submission Queue Id: 0x0 00:26:38.547 Command Id: 0x5 00:26:38.547 Phase Bit: 0 00:26:38.547 Status Code: 0x2 00:26:38.547 Status Code Type: 0x0 00:26:38.547 Do Not Retry: 1 00:26:38.547 Error Location: 0x28 00:26:38.547 LBA: 0x0 00:26:38.547 Namespace: 0x0 00:26:38.547 Vendor Log Page: 0x0 00:26:38.547 ----------- 00:26:38.547 Entry: 2 00:26:38.547 Error Count: 0x1 00:26:38.547 Submission Queue Id: 0x0 00:26:38.547 Command Id: 0x4 00:26:38.547 Phase Bit: 0 00:26:38.547 Status Code: 0x2 00:26:38.547 Status Code Type: 0x0 00:26:38.547 Do Not Retry: 1 00:26:38.547 Error Location: 0x28 00:26:38.547 LBA: 0x0 00:26:38.547 Namespace: 0x0 00:26:38.547 Vendor Log Page: 0x0 00:26:38.547 00:26:38.547 Number of Queues 00:26:38.547 ================ 00:26:38.547 Number of I/O Submission Queues: 128 00:26:38.547 Number of I/O Completion Queues: 128 00:26:38.547 00:26:38.547 ZNS Specific Controller Data 00:26:38.547 ============================ 00:26:38.547 Zone Append Size Limit: 0 00:26:38.547 00:26:38.547 00:26:38.547 Active Namespaces 00:26:38.547 ================= 00:26:38.547 get_feature(0x05) failed 00:26:38.547 Namespace ID:1 00:26:38.547 Command Set Identifier: NVM (00h) 00:26:38.547 Deallocate: Supported 00:26:38.547 Deallocated/Unwritten Error: Not Supported 00:26:38.547 Deallocated Read Value: Unknown 00:26:38.547 Deallocate in Write Zeroes: Not Supported 00:26:38.547 Deallocated Guard Field: 0xFFFF 00:26:38.547 Flush: Supported 00:26:38.547 Reservation: Not Supported 00:26:38.547 Namespace Sharing Capabilities: Multiple Controllers 00:26:38.547 Size (in LBAs): 3750748848 (1788GiB) 00:26:38.547 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:38.547 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:38.547 UUID: 86168269-d366-43ec-a42c-98b3d602180c 00:26:38.547 Thin Provisioning: Not Supported 00:26:38.547 Per-NS Atomic Units: Yes 00:26:38.547 Atomic Write Unit (Normal): 8 00:26:38.547 Atomic Write Unit (PFail): 8 00:26:38.547 Preferred Write Granularity: 8 00:26:38.547 Atomic Compare & Write Unit: 8 00:26:38.547 Atomic Boundary Size (Normal): 0 00:26:38.547 Atomic Boundary Size (PFail): 0 00:26:38.547 Atomic Boundary Offset: 0 00:26:38.547 NGUID/EUI64 Never Reused: No 00:26:38.547 ANA group ID: 1 00:26:38.547 Namespace Write Protected: No 00:26:38.547 Number of LBA Formats: 1 00:26:38.547 Current LBA Format: LBA Format #00 00:26:38.547 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:38.547 00:26:38.547 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:38.547 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:38.547 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:38.547 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:38.547 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:38.547 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.547 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:38.547 rmmod nvme_tcp 00:26:38.547 rmmod nvme_fabrics 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.808 14:13:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:40.717 14:13:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:40.979 14:13:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:44.291 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:44.291 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:44.291 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:26:44.291 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:44.291 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:44.291 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:44.291 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:44.291 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:44.552 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:44.552 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:44.552 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:44.552 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:44.552 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:44.552 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:44.552 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:44.552 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:44.552 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:44.813 00:26:44.813 real 0m19.765s 00:26:44.813 user 0m5.423s 00:26:44.813 sys 0m11.324s 00:26:44.813 14:13:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.813 14:13:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:44.813 ************************************ 00:26:44.813 END TEST nvmf_identify_kernel_target 00:26:44.813 ************************************ 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.075 ************************************ 00:26:45.075 START TEST nvmf_auth_host 00:26:45.075 ************************************ 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:45.075 * Looking for test storage... 
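Before the nvmf_auth_host output continues, the clean_kernel_target step that closed the identify test above is worth spelling out: it unwinds the kernel nvmet target through configfs and then unloads the target modules. A minimal sketch of that teardown follows, using the NQN and port number from the trace; the traced "echo 0" does not show its redirection target, so writing to the namespace's enable file is an assumption about what the helper does there.

#!/usr/bin/env bash
# Teardown of the kernel NVMe-oF (nvmet) target configured via configfs,
# mirroring the clean_kernel_target trace above. NQN and port match the log.
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # disable the namespace (assumed target of the traced echo 0)
rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink the subsystem from TCP port 1
rmdir "$cfg/subsystems/$nqn/namespaces/1"             # then remove the namespace, port and subsystem nodes
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                           # finally unload the kernel target modules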
00:26:45.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:45.075 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:45.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.337 --rc genhtml_branch_coverage=1 00:26:45.337 --rc genhtml_function_coverage=1 00:26:45.337 --rc genhtml_legend=1 00:26:45.337 --rc geninfo_all_blocks=1 00:26:45.337 --rc geninfo_unexecuted_blocks=1 00:26:45.337 00:26:45.337 ' 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:45.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.337 --rc genhtml_branch_coverage=1 00:26:45.337 --rc genhtml_function_coverage=1 00:26:45.337 --rc genhtml_legend=1 00:26:45.337 --rc geninfo_all_blocks=1 00:26:45.337 --rc geninfo_unexecuted_blocks=1 00:26:45.337 00:26:45.337 ' 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:45.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.337 --rc genhtml_branch_coverage=1 00:26:45.337 --rc genhtml_function_coverage=1 00:26:45.337 --rc genhtml_legend=1 00:26:45.337 --rc geninfo_all_blocks=1 00:26:45.337 --rc geninfo_unexecuted_blocks=1 00:26:45.337 00:26:45.337 ' 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:45.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.337 --rc genhtml_branch_coverage=1 00:26:45.337 --rc genhtml_function_coverage=1 00:26:45.337 --rc genhtml_legend=1 00:26:45.337 --rc geninfo_all_blocks=1 00:26:45.337 --rc geninfo_unexecuted_blocks=1 00:26:45.337 00:26:45.337 ' 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.337 14:13:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.337 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:45.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.338 14:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.480 14:13:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:53.480 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:53.480 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.480 
14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:53.480 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:53.480 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.480 14:13:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:26:53.480 00:26:53.480 --- 10.0.0.2 ping statistics --- 00:26:53.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.480 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:26:53.480 00:26:53.480 --- 10.0.0.1 ping statistics --- 00:26:53.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.480 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1174558 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1174558 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1174558 ']' 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
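The nvmf_tcp_init and nvmfappstart traces above build a loopback NVMe/TCP topology between the two E810 ports: cvl_0_0 is moved into a network namespace and carries the target-side address, while cvl_0_1 stays in the root namespace as the initiator. A condensed sketch of that setup, with interface names, addresses and the nvmf_tgt invocation taken from the trace (the relative binary path is an assumption for brevity):

#!/usr/bin/env bash
# Loopback NVMe/TCP topology as built by nvmf_tcp_init in the trace:
# cvl_0_0 (target side) goes into namespace cvl_0_0_ns_spdk with 10.0.0.2,
# cvl_0_1 (initiator side) stays in the root namespace with 10.0.0.1.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Accept NVMe/TCP traffic (port 4420) arriving on the initiator-side interface.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity both ways, load the initiator transport, then start
# the SPDK target inside the namespace (as the nvmfappstart trace does).
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &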
00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.480 14:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b3eca680ae3b834e3cf70b33a5bcf344 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.n2a 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b3eca680ae3b834e3cf70b33a5bcf344 0 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b3eca680ae3b834e3cf70b33a5bcf344 0 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b3eca680ae3b834e3cf70b33a5bcf344 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.n2a 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.n2a 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.n2a 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:53.741 14:13:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=795c20d083199987bb11a95c382f7b9aa467c513e1d4708643a3698917fa527e 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wqY 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 795c20d083199987bb11a95c382f7b9aa467c513e1d4708643a3698917fa527e 3 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 795c20d083199987bb11a95c382f7b9aa467c513e1d4708643a3698917fa527e 3 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=795c20d083199987bb11a95c382f7b9aa467c513e1d4708643a3698917fa527e 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:53.741 14:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wqY 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wqY 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wqY 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bceb6f61e6ceb34a68e7eb4d54c9b1d54452ad7c9f8a4d81 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nlx 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bceb6f61e6ceb34a68e7eb4d54c9b1d54452ad7c9f8a4d81 0 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bceb6f61e6ceb34a68e7eb4d54c9b1d54452ad7c9f8a4d81 0 
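Each gen_dhchap_key invocation traced in this stretch follows the same pattern: draw len/2 random bytes, hex-encode them with xxd, wrap the result into a DHHC-1 secret via an inline python step that the log does not expand, and store it mode 0600 in a temp file that is later registered with the keyring_file_add_key RPC. The sketch below reproduces that flow; the DHHC-1 encoding shown (base64 of the secret text plus a little-endian CRC32, with hash id 0/1/2/3 for null/sha256/sha384/sha512) is an assumed reconstruction of the hidden python step, not copied from the log.

#!/usr/bin/env bash
# Sketch of the gen_dhchap_key helper traced here. The xxd call, digest map,
# mktemp template and chmod come from the trace; the python encoding is an
# assumption about what the unexpanded "python -" step produces.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")

    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                            # the hex string itself is the secret text (assumption)
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:", end="")
' "$key" "${digests[$digest]}" > "$file"

    chmod 0600 "$file"
    echo "$file"
}

# The test later hands each file to the target via the keyring RPC, e.g.
# (rpc_cmd in the trace wraps SPDK's scripts/rpc.py):
#   scripts/rpc.py keyring_file_add_key key0 "$(gen_dhchap_key null 32)"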
00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bceb6f61e6ceb34a68e7eb4d54c9b1d54452ad7c9f8a4d81 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:53.741 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nlx 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nlx 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nlx 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3a5669f300802de52565ae9fa14bf6498118ffa3f0018ebf 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.w5h 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3a5669f300802de52565ae9fa14bf6498118ffa3f0018ebf 2 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3a5669f300802de52565ae9fa14bf6498118ffa3f0018ebf 2 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3a5669f300802de52565ae9fa14bf6498118ffa3f0018ebf 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.w5h 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.w5h 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.w5h 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.003 14:13:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c28ce4e0b9ec3ac9a074cd3460e9e5d4 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.pZb 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c28ce4e0b9ec3ac9a074cd3460e9e5d4 1 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c28ce4e0b9ec3ac9a074cd3460e9e5d4 1 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:54.003 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c28ce4e0b9ec3ac9a074cd3460e9e5d4 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.pZb 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.pZb 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pZb 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=436993ed3c0c9c219c4c655d8a942fdb 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ff3 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 436993ed3c0c9c219c4c655d8a942fdb 1 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 436993ed3c0c9c219c4c655d8a942fdb 1 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=436993ed3c0c9c219c4c655d8a942fdb 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ff3 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ff3 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ff3 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=443e2d283ec629b08b29fcd473cfa894c2ad6aa2bf934686 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xnp 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 443e2d283ec629b08b29fcd473cfa894c2ad6aa2bf934686 2 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 443e2d283ec629b08b29fcd473cfa894c2ad6aa2bf934686 2 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=443e2d283ec629b08b29fcd473cfa894c2ad6aa2bf934686 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:54.004 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xnp 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xnp 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xnp 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:54.266 14:13:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=46b91514ae03591ce95965ce4d5fea63 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8ak 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 46b91514ae03591ce95965ce4d5fea63 0 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 46b91514ae03591ce95965ce4d5fea63 0 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=46b91514ae03591ce95965ce4d5fea63 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8ak 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8ak 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8ak 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bbda66e80129af86df3ec87710f7b109b1bf1bf2c9bf1eb4d2a87810ad259eed 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xwj 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bbda66e80129af86df3ec87710f7b109b1bf1bf2c9bf1eb4d2a87810ad259eed 3 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bbda66e80129af86df3ec87710f7b109b1bf1bf2c9bf1eb4d2a87810ad259eed 3 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bbda66e80129af86df3ec87710f7b109b1bf1bf2c9bf1eb4d2a87810ad259eed 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xwj 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xwj 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xwj 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1174558 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1174558 ']' 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.266 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.n2a 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wqY ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wqY 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nlx 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.w5h ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.w5h 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pZb 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ff3 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ff3 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.xnp 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8ak ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8ak 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xwj 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.528 14:13:52 
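The five key/counter-key pairs registered above came out of the gen_dhchap_key calls traced at the top of this block: read N random bytes with xxd from /dev/urandom, wrap them into a DHHC-1:<hash-id>: secret with a short python snippet, drop the result into a mktemp file and chmod it to 0600. A stand-alone sketch of that sequence, assuming the usual nvme-cli framing of base64(key bytes + little-endian CRC-32) for the secret body (the SPDK helper itself is not reproduced here, and gen_key_sketch is a made-up name):

gen_key_sketch() {                               # hash id: 0=null 1=sha256 2=sha384 3=sha512
    local hashid=$1 nbytes=$2
    local hex file
    hex=$(xxd -p -c0 -l "$nbytes" /dev/urandom)  # raw key material, hex encoded, as in the trace
    file=$(mktemp -t sketch.key.XXX)
    python3 - "$hashid" "$hex" > "$file" <<'PYEOF'
import base64, binascii, struct, sys
hashid, raw = int(sys.argv[1]), bytes.fromhex(sys.argv[2])
# assumption: secret body = base64(key bytes + little-endian CRC-32), as nvme-cli emits
blob = raw + struct.pack("<I", binascii.crc32(raw))
print(f"DHHC-1:{hashid:02x}:{base64.b64encode(blob).decode()}:")
PYEOF
    chmod 0600 "$file"                           # same permissions the trace sets
    echo "$file"
}
gen_key_sketch 1 16                              # 16 random bytes -> a sha256-transform key like keys[2] above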
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:54.528 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:54.788 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.788 14:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:58.083 Waiting for block devices as requested 00:26:58.083 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:58.083 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:58.083 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:58.344 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:58.344 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:58.344 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:58.605 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:58.605 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:58.605 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:58.864 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:58.864 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:59.125 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:59.125 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:59.125 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:59.125 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:59.386 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:59.386 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:00.362 No valid GPT data, bailing 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:00.362 14:13:58 
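The mkdir calls just above, together with the echo/ln -s trace that follows, are the stock Linux nvmet configfs layout: one subsystem with one namespace backed by the nvme0n1 device that passed the GPT check, and one TCP port that the subsystem gets linked into. Spelled out as plain commands; the attribute file names are an assumption (xtrace does not show redirection targets), while the NQN, device and address are the ones in the trace:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
modprobe nvmet                                 # as traced; nvmet-tcp is assumed to be loaded as well for the tcp port
modprobe nvmet-tcp
mkdir -p "$subsys/namespaces/1" "$port"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"            # enables the export; the nvme discover below should then list it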
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:00.362 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:00.362 00:27:00.362 Discovery Log Number of Records 2, Generation counter 2 00:27:00.362 =====Discovery Log Entry 0====== 00:27:00.362 trtype: tcp 00:27:00.363 adrfam: ipv4 00:27:00.363 subtype: current discovery subsystem 00:27:00.363 treq: not specified, sq flow control disable supported 00:27:00.363 portid: 1 00:27:00.363 trsvcid: 4420 00:27:00.363 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:00.363 traddr: 10.0.0.1 00:27:00.363 eflags: none 00:27:00.363 sectype: none 00:27:00.363 =====Discovery Log Entry 1====== 00:27:00.363 trtype: tcp 00:27:00.363 adrfam: ipv4 00:27:00.363 subtype: nvme subsystem 00:27:00.363 treq: not specified, sq flow control disable supported 00:27:00.363 portid: 1 00:27:00.363 trsvcid: 4420 00:27:00.363 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:00.363 traddr: 10.0.0.1 00:27:00.363 eflags: none 00:27:00.363 sectype: none 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.363 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.624 nvme0n1 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
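Before that first attach, host/auth.sh created nqn.2024-02.io.spdk:host0 under the target's hosts/ directory, turned allow_any_host off, linked the host into the subsystem's allowed_hosts, and had nvmet_auth_set_key push 'hmac(sha256)', ffdhe2048 and the two DHHC-1 secrets into that host entry. Written out against configfs directly; the dhchap_* attribute names are an assumption (the trace only shows the echo side), and the secrets are the keyid-1 pair echoed above:

hosts=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$hosts"
echo 0 > "$subsys/attr_allow_any_host"         # only hosts linked below may connect
ln -s "$hosts" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$hosts/dhchap_hash"     # attribute names assumed, values taken from the trace
echo ffdhe2048      > "$hosts/dhchap_dhgroup"
echo 'DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==:'  > "$hosts/dhchap_key"
echo 'DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==:' > "$hosts/dhchap_ctrl_key"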
00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.624 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.884 nvme0n1 00:27:00.884 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.884 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.884 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.884 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.884 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.884 14:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.884 14:13:59 
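On the initiator side, the SPDK application is the NVMe-oF host here, and each connect_authenticate pass is a handful of RPCs: pin the digest/dhgroup being exercised, attach with the keyring entries registered earlier, check that a controller showed up, detach. The same calls issued through scripts/rpc.py (path relative to the spdk checkout; key0/ckey0 are the names added with keyring_file_add_key above):

rpc=./scripts/rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.n2a      # done once up front, per the trace
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wqY
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers                              # expect one controller named nvme0
$rpc bdev_nvme_detach_controller nvme0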
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.884 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.146 nvme0n1 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.146 nvme0n1 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:01.146 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.417 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.418 nvme0n1 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.418 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.679 nvme0n1 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.679 14:13:59 
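From this point the log repeats the same target-side/initiator-side pair for every combination the harness set up at the start: digests sha256, sha384 and sha512, dhgroups ffdhe2048 through ffdhe8192, key ids 0 through 4. The shape of that sweep, using the function names visible in the trace (argument handling simplified):

for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do                           # 0..4, as registered above
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # push the key pair into the kernel target
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach/detach through SPDK with the same pair
        done
    done
done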
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.679 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.941 14:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.941 nvme0n1 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.941 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.203 
14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.203 nvme0n1 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:02.203 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.204 14:14:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.204 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.204 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:02.204 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.204 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.204 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.204 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.466 nvme0n1 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.466 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.729 14:14:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.729 nvme0n1 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.729 14:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.729 14:14:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.729 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.992 nvme0n1 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.992 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.254 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.516 nvme0n1 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:03.516 14:14:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:03.516 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.517 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.779 nvme0n1 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
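The trace above is one pass of the test's connect_authenticate helper (host/auth.sh@55-65): the initiator is restricted to a single DH-HMAC-CHAP digest and FFDHE group via bdev_nvme_set_options, the controller is attached with the matching key pair, the attach is verified with bdev_nvme_get_controllers, and the controller is detached again before the next key is tried. A minimal standalone sketch of that sequence with the same RPCs via scripts/rpc.py follows; the rpc.py path and socket are assumptions, key2/ckey2 are assumed to have been registered in the host keyring earlier in the test (not shown in this excerpt), and the address, port, and NQNs are copied from the trace.

#!/usr/bin/env bash
# Sketch only: one connect/verify/detach pass equivalent to the traced
# connect_authenticate sha256 ffdhe4096 2. Assumes a running SPDK nvmf target
# listening on 10.0.0.1:4420 and DHHC-1 keys already registered under the
# keyring names key2 and ckey2.
set -e
rpc="./scripts/rpc.py"   # assumed path; add -s <socket> if not using the default

# Allow only this digest/dhgroup combination on the host side.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Attach with DH-HMAC-CHAP: key2 authenticates the host, ckey2 enables
# bidirectional (controller) authentication.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2

# The attach only succeeds if authentication passed; verify, then clean up.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
$rpc bdev_nvme_detach_controller nvme0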
00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.779 14:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.043 nvme0n1 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.043 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.306 nvme0n1 00:27:04.306 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.306 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.306 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.306 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.306 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.306 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.567 14:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.567 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.828 nvme0n1 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.828 14:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.399 nvme0n1 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 
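Each nvmet_auth_set_key call traced here (host/auth.sh@42-51) is the target-side half of the handshake: it selects the HMAC digest and FFDHE group and installs the host's DHHC-1 key, plus the optional controller key for bidirectional authentication, for the allowed host NQN. The echoed values correspond to writes into the Linux kernel nvmet target's configfs; the sketch below illustrates that idea for the keyid=1 / ffdhe6144 case just traced, assuming the standard nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), since the exact paths used by the helper are not visible in this excerpt.

#!/usr/bin/env bash
# Sketch of the target-side key setup for: nvmet_auth_set_key sha256 ffdhe6144 1
# Assumption: standard Linux nvmet configfs layout, with the host entry already
# created and linked into the subsystem's allowed_hosts.
hostnqn="nqn.2024-02.io.spdk:host0"
host_cfs="/sys/kernel/config/nvmet/hosts/$hostnqn"

echo "hmac(sha256)" > "$host_cfs/dhchap_hash"    # DH-HMAC-CHAP digest
echo "ffdhe6144" > "$host_cfs/dhchap_dhgroup"    # FFDHE group
# Host key (keyid=1 in the test's key table), copied from the trace:
echo "DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==:" > "$host_cfs/dhchap_key"
# Controller key for bidirectional authentication (ckey1 in the trace):
echo "DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==:" > "$host_cfs/dhchap_ctrl_key"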
00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.399 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.660 nvme0n1 00:27:05.660 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.660 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.660 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.660 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.660 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.660 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.660 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.660 14:14:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.660 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.660 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.921 14:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.184 nvme0n1 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.184 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.185 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.185 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.185 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.756 nvme0n1 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.756 14:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.329 nvme0n1 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.329 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.330 14:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.903 nvme0n1 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:07.903 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.904 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.848 nvme0n1 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:08.848 
14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.848 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.849 14:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.421 nvme0n1 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.421 
14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.421 14:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.994 nvme0n1 00:27:09.994 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.994 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.994 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.994 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.994 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.256 14:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.828 nvme0n1 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:10.828 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.829 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.090 nvme0n1 00:27:11.090 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.090 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.090 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.090 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.090 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:11.090 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.091 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.352 nvme0n1 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:11.352 14:14:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.352 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.613 nvme0n1 00:27:11.613 14:14:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.613 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.873 nvme0n1 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.873 14:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.873 nvme0n1 00:27:11.873 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:12.134 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.135 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.135 nvme0n1 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.396 
14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.396 14:14:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.396 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.657 nvme0n1 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:12.657 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.658 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.919 nvme0n1 00:27:12.919 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.919 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.919 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.919 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.919 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.919 14:14:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.919 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:12.919 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.919 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.919 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.919 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.919 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.919 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.920 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.181 nvme0n1 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.181 
14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.181 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.443 nvme0n1 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.443 
14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.443 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.705 nvme0n1 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.705 14:14:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.705 14:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.966 nvme0n1 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.966 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.967 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.967 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.967 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.967 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.967 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.967 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.967 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.228 nvme0n1 00:27:14.228 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.228 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.228 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.228 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.228 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.228 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.489 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.761 nvme0n1 00:27:14.761 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.762 14:14:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.762 14:14:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.040 nvme0n1 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
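
The entries above trace one full connect_authenticate cycle: bdev_nvme_set_options pins the allowed DH-HMAC-CHAP digest and DH group, bdev_nvme_attach_controller dials 10.0.0.1:4420 with the host and controller keys for the current key index, bdev_nvme_get_controllers confirms the controller came up as nvme0, and bdev_nvme_detach_controller tears it down again. A minimal stand-alone sketch of the same sequence, assuming a running SPDK target whose keyring already holds keys named key2/ckey2 (the test registers them earlier, outside this excerpt) and using scripts/rpc.py directly; SPDK_DIR is a placeholder, not something from the log:

#!/usr/bin/env bash
# Sketch of the connect/verify/detach cycle traced above (host/auth.sh@60-65).
set -euo pipefail
rpc="${SPDK_DIR:-.}/scripts/rpc.py"

# Restrict the initiator to one digest/DH-group pair, as host/auth.sh@60 does.
"$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Connect with DH-HMAC-CHAP (host/auth.sh@61); flags copied from the trace.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the controller showed up, then detach (host/auth.sh@64-65).
name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]
"$rpc" bdev_nvme_detach_controller nvme0

The rpc_cmd seen in the trace appears to be the autotest wrapper around the same RPC client, so the flags carry through unchanged.
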
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.040 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.695 nvme0n1 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.695 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.696 14:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.986 nvme0n1 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.986 14:14:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.986 14:14:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.986 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.987 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.987 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.987 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.987 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.558 nvme0n1 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.558 14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.558 
14:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.129 nvme0n1 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
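
Each "nvme0n1" marker above is followed by the same pattern for the next key index: the outer loops echoed at host/auth.sh@100-102 iterate digests, DH groups, and key indices, and @103-104 first provision the target-side key (nvmet_auth_set_key) and then run connect_authenticate. A rough, runnable skeleton of that driver loop, reconstructed only from what the trace shows; the array contents are illustrative (only sha384/sha512 and the ffdhe groups appear in this excerpt) and the two helpers are stubbed:

# Skeleton of the loop traced at host/auth.sh@100-104 (illustrative values).
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
keys=("DHHC-1:00:..." "DHHC-1:00:..." "DHHC-1:01:..." "DHHC-1:02:..." "DHHC-1:03:...")

nvmet_auth_set_key()   { echo "target key:   digest=$1 dhgroup=$2 keyid=$3"; }
connect_authenticate() { echo "host connect: digest=$1 dhgroup=$2 keyid=$3"; }

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
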
common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.129 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.389 nvme0n1 00:27:17.389 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.389 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.389 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.389 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.389 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.650 14:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.650 14:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.223 nvme0n1 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.223 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.224 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.224 14:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.165 nvme0n1 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
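
One detail worth calling out from the host/auth.sh@58 line echoed repeatedly above: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) builds either a two-element array or an empty one, so the attach call only receives --dhchap-ctrlr-key when a controller key exists for that index. That is why the key-index-4 attaches in this trace pass only --dhchap-key key4 and the matching set_key step shows an empty ckey=. A tiny self-contained demo of the ${var:+word} expansion (key strings are placeholders):

# Demo of the ${var:+...} idiom from host/auth.sh@58; index 4 has no controller key.
ckeys=([1]="DHHC-1:02:placeholder" [4]="")

for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> extra attach args: ${ckey[*]:-<none>}"
done
# keyid=1 -> extra attach args: --dhchap-ctrlr-key ckey1
# keyid=4 -> extra attach args: <none>
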
xtrace_disable 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.165 
14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.165 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.735 nvme0n1 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.735 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.736 14:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.304 nvme0n1 00:27:20.304 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.304 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.304 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.304 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.304 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.304 14:14:18 
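
The nvmf/common.sh@769-783 run repeated before every attach is the trace of get_main_ns_ip: it keeps an associative array mapping each transport to the name of the environment variable holding the address to dial, picks the tcp entry here, and ends up echoing 10.0.0.1. A condensed paraphrase of the tcp path of those traced lines; the variable that expands to "tcp" is not visible in this wrapped trace, so TEST_TRANSPORT is an assumption, and the indirect expansion is inferred from ip=NVMF_INITIATOR_IP being followed by the literal address in the next test:

# Paraphrase of the get_main_ns_ip trace (tcp branch only; fallbacks omitted).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -z "$TEST_TRANSPORT" ]] && return 1
    [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
    ip=${!ip}                              # dereference it
    [[ -z "$ip" ]] && return 1
    echo "$ip"
}

TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1   # illustrative values matching the trace
get_main_ns_ip                                  # prints 10.0.0.1
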
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.564 14:14:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.564 14:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.133 nvme0n1 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.133 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.393 nvme0n1 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.393 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.394 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.394 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.655 nvme0n1 00:27:21.655 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:21.656 
14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.656 14:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.916 nvme0n1 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.916 
14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.916 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.177 nvme0n1 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.177 nvme0n1 00:27:22.177 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.438 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.439 nvme0n1 00:27:22.439 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.700 
14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.700 14:14:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.700 14:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.961 nvme0n1 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:22.961 14:14:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.961 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.962 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.223 nvme0n1 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.223 14:14:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.223 nvme0n1 00:27:23.223 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.484 
14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.484 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
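The repeated xtrace blocks above all follow one pattern: pick a (digest, dhgroup, keyid) combination, restrict the initiator to it with bdev_nvme_set_options, attach with the matching DH-HMAC-CHAP key (plus the controller key when one exists), confirm the controller shows up in bdev_nvme_get_controllers, and detach before the next combination. Below is a minimal standalone sketch of that host-side loop, not the test script itself; it assumes scripts/rpc.py from an SPDK checkout, a running bdev_nvme application, a target listening at 10.0.0.1:4420 provisioned with matching secrets, and that the names key0..key4 / ckey0..ckey3 were registered with SPDK's keyring earlier in the run (auth.sh does that outside this excerpt). Digests, dhgroups, and key ids are limited to the ones visible in this part of the log.

#!/usr/bin/env bash
# Sketch of the host-side sweep traced above (assumptions noted in the text).
set -e

rpc=./scripts/rpc.py                      # assumed path to SPDK's rpc.py
hostnqn=nqn.2024-02.io.spdk:host0         # NQNs taken from the trace
subnqn=nqn.2024-02.io.spdk:cnode0

digests=(sha384 sha512)                             # digests seen in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)  # dhgroups seen in this excerpt

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3 4; do
      # Restrict the initiator to a single digest/dhgroup so the DH-HMAC-CHAP
      # handshake is forced to use exactly the combination under test.
      "$rpc" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Key 4 has no controller (bidirectional) key in this run, so pass
      # --dhchap-ctrlr-key only when a ckey exists -- the same effect as the
      # ${ckeys[keyid]:+...} expansion in auth.sh.
      ctrlr_key=()
      [[ "$keyid" -ne 4 ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")

      "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

      # Authentication succeeded if the controller appears by name...
      "$rpc" bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0
      # ...then tear it down before the next combination.
      "$rpc" bdev_nvme_detach_controller nvme0
    done
  done
done

Sweeping one digest/dhgroup pair per attach is what makes a failure attributable to a specific combination: if several algorithms were offered at once, a successful connect would not reveal which one the handshake actually used.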
00:27:23.745 nvme0n1 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.745 14:14:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.745 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.746 14:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.007 nvme0n1 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.007 14:14:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.007 14:14:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.007 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.268 nvme0n1 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.268 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.269 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.269 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.269 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.269 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.269 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.269 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.529 nvme0n1 00:27:24.529 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.529 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.529 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.529 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.529 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.529 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:24.789 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.790 14:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.050 nvme0n1 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.050 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.310 nvme0n1 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.310 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.311 14:14:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.311 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.884 nvme0n1 00:27:25.884 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.884 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.884 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.884 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.884 14:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.884 14:14:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.884 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.455 nvme0n1 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.455 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.456 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.716 nvme0n1 00:27:26.716 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.716 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.716 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.716 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.716 14:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.716 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.977 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.237 nvme0n1 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.237 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.498 14:14:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.498 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.759 nvme0n1 00:27:27.759 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.759 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.759 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.759 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.759 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.759 14:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNlY2E2ODBhZTNiODM0ZTNjZjcwYjMzYTViY2YzNDS3Nce/: 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: ]] 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzk1YzIwZDA4MzE5OTk4N2JiMTFhOTVjMzgyZjdiOWFhNDY3YzUxM2UxZDQ3MDg2NDNhMzY5ODkxN2ZhNTI3ZWF1apg=: 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.759 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.020 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.591 nvme0n1 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.591 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.592 14:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.162 nvme0n1 00:27:29.162 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.162 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.162 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.162 14:14:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.162 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.162 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.422 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.422 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.422 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.423 14:14:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.423 14:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.993 nvme0n1 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:29.993 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDQzZTJkMjgzZWM2MjliMDhiMjlmY2Q0NzNjZmE4OTRjMmFkNmFhMmJmOTM0Njg2uhXDlg==: 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: ]] 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZiOTE1MTRhZTAzNTkxY2U5NTk2NWNlNGQ1ZmVhNjMIErLe: 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.994 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.994 
14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.565 nvme0n1 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmJkYTY2ZTgwMTI5YWY4NmRmM2VjODc3MTBmN2IxMDliMWJmMWJmMmM5YmYxZWI0ZDJhODc4MTBhZDI1OWVlZIYmL80=: 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.826 14:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.396 nvme0n1 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:31.396 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.397 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.656 request: 00:27:31.656 { 00:27:31.656 "name": "nvme0", 00:27:31.656 "trtype": "tcp", 00:27:31.656 "traddr": "10.0.0.1", 00:27:31.656 "adrfam": "ipv4", 00:27:31.656 "trsvcid": "4420", 00:27:31.656 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:31.656 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:31.656 "prchk_reftag": false, 00:27:31.656 "prchk_guard": false, 00:27:31.656 "hdgst": false, 00:27:31.656 "ddgst": false, 00:27:31.656 "allow_unrecognized_csi": false, 00:27:31.656 "method": "bdev_nvme_attach_controller", 00:27:31.656 "req_id": 1 00:27:31.656 } 00:27:31.656 Got JSON-RPC error response 00:27:31.656 response: 00:27:31.656 { 00:27:31.656 "code": -5, 00:27:31.656 "message": "Input/output error" 00:27:31.656 } 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
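The failure above is the expected outcome: the trace wraps bdev_nvme_attach_controller in the harness's NOT helper, so a connect attempt without a DH-HMAC-CHAP key against a target that requires sha512/ffdhe8192 must return the -5 (Input/output error) seen in the JSON-RPC response, while the keyed attaches earlier in the loop succeed. A minimal standalone sketch of that pattern with SPDK's rpc.py follows; the address, NQNs and key names mirror this run, but the rpc.py path and the assumption that key0/ckey0 were registered with the keyring beforehand are illustrative, not taken from the log.

# Hedged sketch, not part of the captured log. Assumes key0/ckey0 were
# registered earlier (e.g. by the harness's keyring setup) and that the
# kernel target at 10.0.0.1:4420 requires DH-HMAC-CHAP authentication.
RPC=./scripts/rpc.py

# Expected failure: no --dhchap-key, so no controller should be created and
# rpc.py should exit non-zero with the -5 "Input/output error" response.
if "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "unexpected: unauthenticated attach succeeded" >&2
    exit 1
fi

# Expected success: host key and controller key match what the target expects.
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0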
00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.656 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.656 request: 00:27:31.656 { 00:27:31.656 "name": "nvme0", 00:27:31.656 "trtype": "tcp", 00:27:31.656 "traddr": "10.0.0.1", 00:27:31.656 "adrfam": "ipv4", 00:27:31.656 "trsvcid": "4420", 00:27:31.656 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:31.656 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:31.656 "prchk_reftag": false, 00:27:31.656 "prchk_guard": false, 00:27:31.656 "hdgst": false, 00:27:31.656 "ddgst": false, 00:27:31.656 "dhchap_key": "key2", 00:27:31.656 "allow_unrecognized_csi": false, 00:27:31.656 "method": "bdev_nvme_attach_controller", 00:27:31.656 "req_id": 1 00:27:31.656 } 00:27:31.656 Got JSON-RPC error response 00:27:31.656 response: 00:27:31.656 { 00:27:31.656 "code": -5, 00:27:31.656 "message": "Input/output error" 00:27:31.656 } 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
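Right after the rejected key2-only attach, the harness re-reads the controller list and asserts it is empty, confirming that the failed DH-HMAC-CHAP handshake did not leave a stale bdev controller behind. The same check can be run standalone roughly as below (rpc.py path assumed; jq as in the trace):

# Hedged sketch: assert no nvme bdev controller survived the failed attach.
count=$(./scripts/rpc.py bdev_nvme_get_controllers | jq length)
if (( count != 0 )); then
    echo "stale controller left after failed DH-HMAC-CHAP attach" >&2
    exit 1
fi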
00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.657 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.916 request: 00:27:31.916 { 00:27:31.916 "name": "nvme0", 00:27:31.916 "trtype": "tcp", 00:27:31.916 "traddr": "10.0.0.1", 00:27:31.916 "adrfam": "ipv4", 00:27:31.916 "trsvcid": "4420", 00:27:31.916 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:31.916 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:31.916 "prchk_reftag": false, 00:27:31.916 "prchk_guard": false, 00:27:31.916 "hdgst": false, 00:27:31.916 "ddgst": false, 00:27:31.916 "dhchap_key": "key1", 00:27:31.916 "dhchap_ctrlr_key": "ckey2", 00:27:31.917 "allow_unrecognized_csi": false, 00:27:31.917 "method": "bdev_nvme_attach_controller", 00:27:31.917 "req_id": 1 00:27:31.917 } 00:27:31.917 Got JSON-RPC error response 00:27:31.917 response: 00:27:31.917 { 00:27:31.917 "code": -5, 00:27:31.917 "message": "Input/output 
error" 00:27:31.917 } 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.917 14:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.917 nvme0n1 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.917 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.177 request: 00:27:32.177 { 00:27:32.177 "name": "nvme0", 00:27:32.177 "dhchap_key": "key1", 00:27:32.177 "dhchap_ctrlr_key": "ckey2", 00:27:32.177 "method": "bdev_nvme_set_keys", 00:27:32.177 "req_id": 1 00:27:32.177 } 00:27:32.177 Got JSON-RPC error response 00:27:32.177 response: 00:27:32.177 { 00:27:32.177 "code": -13, 00:27:32.177 "message": "Permission denied" 00:27:32.177 } 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:32.177 14:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:33.118 14:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.118 14:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:33.118 14:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.118 14:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.118 14:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.380 14:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:33.380 14:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:34.324 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlYjZmNjFlNmNlYjM0YTY4ZTdlYjRkNTRjOWIxZDU0NDUyYWQ3YzlmOGE0ZDgx1o37sg==: 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: ]] 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:M2E1NjY5ZjMwMDgwMmRlNTI1NjVhZTlmYTE0YmY2NDk4MTE4ZmZhM2YwMDE4ZWJmQW1UyA==: 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.325 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.586 nvme0n1 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzI4Y2U0ZTBiOWVjM2FjOWEwNzRjZDM0NjBlOWU1ZDRdqutO: 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: ]] 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2OTkzZWQzYzBjOWMyMTljNGM2NTVkOGE5NDJmZGJNUYdO: 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.586 request: 00:27:34.586 { 00:27:34.586 "name": "nvme0", 00:27:34.586 "dhchap_key": "key2", 00:27:34.586 "dhchap_ctrlr_key": "ckey1", 00:27:34.586 "method": "bdev_nvme_set_keys", 00:27:34.586 "req_id": 1 00:27:34.586 } 00:27:34.586 Got JSON-RPC error response 00:27:34.586 response: 00:27:34.586 { 00:27:34.586 "code": -13, 00:27:34.586 "message": "Permission denied" 00:27:34.586 } 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:34.586 14:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:35.598 14:14:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.598 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.598 rmmod nvme_tcp 00:27:35.859 rmmod nvme_fabrics 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1174558 ']' 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1174558 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1174558 ']' 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1174558 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1174558 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1174558' 00:27:35.859 killing process with pid 1174558 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1174558 00:27:35.859 14:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1174558 00:27:35.859 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.859 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.859 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.859 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:35.859 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:35.860 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.860 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.860 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.860 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.860 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.860 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:35.860 14:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:38.407 14:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:41.713 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:41.713 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:41.714 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:41.714 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:41.714 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:41.714 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:41.974 14:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.n2a /tmp/spdk.key-null.nlx /tmp/spdk.key-sha256.pZb /tmp/spdk.key-sha384.xnp /tmp/spdk.key-sha512.xwj /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:41.974 14:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:46.181 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:27:46.181 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:46.181 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:46.181 00:27:46.181 real 1m0.838s 00:27:46.181 user 0m54.691s 00:27:46.181 sys 0m16.013s 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.181 ************************************ 00:27:46.181 END TEST nvmf_auth_host 00:27:46.181 ************************************ 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.181 ************************************ 00:27:46.181 START TEST nvmf_digest 00:27:46.181 ************************************ 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:46.181 * Looking for test storage... 
00:27:46.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.181 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:46.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.182 --rc genhtml_branch_coverage=1 00:27:46.182 --rc genhtml_function_coverage=1 00:27:46.182 --rc genhtml_legend=1 00:27:46.182 --rc geninfo_all_blocks=1 00:27:46.182 --rc geninfo_unexecuted_blocks=1 00:27:46.182 00:27:46.182 ' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:46.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.182 --rc genhtml_branch_coverage=1 00:27:46.182 --rc genhtml_function_coverage=1 00:27:46.182 --rc genhtml_legend=1 00:27:46.182 --rc geninfo_all_blocks=1 00:27:46.182 --rc geninfo_unexecuted_blocks=1 00:27:46.182 00:27:46.182 ' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:46.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.182 --rc genhtml_branch_coverage=1 00:27:46.182 --rc genhtml_function_coverage=1 00:27:46.182 --rc genhtml_legend=1 00:27:46.182 --rc geninfo_all_blocks=1 00:27:46.182 --rc geninfo_unexecuted_blocks=1 00:27:46.182 00:27:46.182 ' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:46.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.182 --rc genhtml_branch_coverage=1 00:27:46.182 --rc genhtml_function_coverage=1 00:27:46.182 --rc genhtml_legend=1 00:27:46.182 --rc geninfo_all_blocks=1 00:27:46.182 --rc geninfo_unexecuted_blocks=1 00:27:46.182 00:27:46.182 ' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.182 
14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:46.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:46.182 14:14:44 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:46.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.327 
14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:54.327 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:54.327 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:54.327 Found net devices under 0000:4b:00.0: cvl_0_0 
00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:54.327 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.327 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:54.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:27:54.328 00:27:54.328 --- 10.0.0.2 ping statistics --- 00:27:54.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.328 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:54.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:27:54.328 00:27:54.328 --- 10.0.0.1 ping statistics --- 00:27:54.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.328 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.328 ************************************ 00:27:54.328 START TEST nvmf_digest_clean 00:27:54.328 ************************************ 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1191505 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1191505 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1191505 ']' 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.328 14:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.328 [2024-10-30 14:14:51.921378] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:27:54.328 [2024-10-30 14:14:51.921442] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.328 [2024-10-30 14:14:52.018601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.328 [2024-10-30 14:14:52.069501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.328 [2024-10-30 14:14:52.069552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.328 [2024-10-30 14:14:52.069562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.328 [2024-10-30 14:14:52.069569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.328 [2024-10-30 14:14:52.069575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:54.328 [2024-10-30 14:14:52.070386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.590 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.590 null0 00:27:54.590 [2024-10-30 14:14:52.873490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.851 [2024-10-30 14:14:52.897791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1191839 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1191839 /var/tmp/bperf.sock 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1191839 ']' 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:54.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.851 14:14:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.851 [2024-10-30 14:14:52.959734] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:27:54.851 [2024-10-30 14:14:52.959806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191839 ] 00:27:54.851 [2024-10-30 14:14:53.051886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.851 [2024-10-30 14:14:53.103619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.795 14:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.795 14:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:55.795 14:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:55.795 14:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:55.795 14:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:55.795 14:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:55.795 14:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.057 nvme0n1 00:27:56.057 14:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:56.057 14:14:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.318 Running I/O for 2 seconds... 
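The trace above shows the first digest_clean pass (randread, 4 KiB blocks, queue depth 128) being wired up entirely over the bdevperf RPC socket: framework_start_init completes the startup that was deferred by --wait-for-rpc, a controller is attached with --ddgst so the data-digest path is exercised, and bdevperf.py perform_tests starts the 2-second run. A minimal sketch of that sequence, assuming the same workspace layout seen in this log and a bdevperf already listening on /var/tmp/bperf.sock (SPDK and SOCK are illustrative variable names, not part of the harness):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # finish bdevperf initialization that was deferred by --wait-for-rpc
  $SPDK/scripts/rpc.py -s $SOCK framework_start_init
  # attach the NVMe/TCP controller with data digest enabled, exposed as bdev nvme0n1
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # start the timed run configured on the bdevperf command line (-w randread -o 4096 -q 128 -t 2)
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

The JSON that follows is the result object emitted by perform_tests for this run.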
00:27:58.207 18190.00 IOPS, 71.05 MiB/s [2024-10-30T13:14:56.506Z] 19132.00 IOPS, 74.73 MiB/s 00:27:58.207 Latency(us) 00:27:58.207 [2024-10-30T13:14:56.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.207 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:58.207 nvme0n1 : 2.01 19151.69 74.81 0.00 0.00 6675.73 3072.00 18896.21 00:27:58.207 [2024-10-30T13:14:56.506Z] =================================================================================================================== 00:27:58.207 [2024-10-30T13:14:56.506Z] Total : 19151.69 74.81 0.00 0.00 6675.73 3072.00 18896.21 00:27:58.207 { 00:27:58.207 "results": [ 00:27:58.207 { 00:27:58.207 "job": "nvme0n1", 00:27:58.207 "core_mask": "0x2", 00:27:58.207 "workload": "randread", 00:27:58.207 "status": "finished", 00:27:58.207 "queue_depth": 128, 00:27:58.207 "io_size": 4096, 00:27:58.207 "runtime": 2.006142, 00:27:58.207 "iops": 19151.685174828104, 00:27:58.207 "mibps": 74.81127021417228, 00:27:58.207 "io_failed": 0, 00:27:58.207 "io_timeout": 0, 00:27:58.207 "avg_latency_us": 6675.7271672609595, 00:27:58.207 "min_latency_us": 3072.0, 00:27:58.207 "max_latency_us": 18896.213333333333 00:27:58.207 } 00:27:58.207 ], 00:27:58.207 "core_count": 1 00:27:58.207 } 00:27:58.207 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:58.207 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:58.207 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:58.207 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:58.207 | select(.opcode=="crc32c") 00:27:58.207 | "\(.module_name) \(.executed)"' 00:27:58.207 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1191839 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1191839 ']' 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1191839 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1191839 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1191839' 00:27:58.468 killing process with pid 1191839 00:27:58.468 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1191839 00:27:58.469 Received shutdown signal, test time was about 2.000000 seconds 00:27:58.469 00:27:58.469 Latency(us) 00:27:58.469 [2024-10-30T13:14:56.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.469 [2024-10-30T13:14:56.768Z] =================================================================================================================== 00:27:58.469 [2024-10-30T13:14:56.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.469 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1191839 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1192522 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1192522 /var/tmp/bperf.sock 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1192522 ']' 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:58.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.730 14:14:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.730 [2024-10-30 14:14:56.865487] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:27:58.730 [2024-10-30 14:14:56.865560] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192522 ] 00:27:58.730 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:58.730 Zero copy mechanism will not be used. 00:27:58.730 [2024-10-30 14:14:56.951192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.730 [2024-10-30 14:14:56.980427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.672 14:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.672 14:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:59.672 14:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:59.672 14:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:59.672 14:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:59.672 14:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.672 14:14:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.933 nvme0n1 00:27:59.933 14:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:59.933 14:14:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:59.933 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:59.933 Zero copy mechanism will not be used. 00:27:59.933 Running I/O for 2 seconds... 
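This pass repeats the randread workload with 128 KiB I/O at queue depth 16; the "I/O size of 131072 is greater than zero copy threshold (65536)" notice means bdevperf will not use its zero-copy path for these buffers. The bandwidth column reported next follows directly from IOPS times I/O size; a quick sanity check of the figure printed below, using the values taken from this log, is:

  # 3899.22 IOPS at 131072-byte blocks -> MiB/s
  awk 'BEGIN { iops = 3899.22; io_size = 131072; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
  # prints 487.40, matching the MiB/s value in the result table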
00:28:02.255 4085.00 IOPS, 510.62 MiB/s [2024-10-30T13:15:00.554Z] 3896.00 IOPS, 487.00 MiB/s 00:28:02.255 Latency(us) 00:28:02.255 [2024-10-30T13:15:00.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.255 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:02.255 nvme0n1 : 2.00 3899.22 487.40 0.00 0.00 4101.09 699.73 7482.03 00:28:02.255 [2024-10-30T13:15:00.554Z] =================================================================================================================== 00:28:02.255 [2024-10-30T13:15:00.554Z] Total : 3899.22 487.40 0.00 0.00 4101.09 699.73 7482.03 00:28:02.255 { 00:28:02.255 "results": [ 00:28:02.255 { 00:28:02.255 "job": "nvme0n1", 00:28:02.255 "core_mask": "0x2", 00:28:02.255 "workload": "randread", 00:28:02.255 "status": "finished", 00:28:02.255 "queue_depth": 16, 00:28:02.255 "io_size": 131072, 00:28:02.255 "runtime": 2.002453, 00:28:02.255 "iops": 3899.2176096018234, 00:28:02.255 "mibps": 487.4022012002279, 00:28:02.255 "io_failed": 0, 00:28:02.255 "io_timeout": 0, 00:28:02.255 "avg_latency_us": 4101.08524590164, 00:28:02.255 "min_latency_us": 699.7333333333333, 00:28:02.255 "max_latency_us": 7482.026666666667 00:28:02.255 } 00:28:02.255 ], 00:28:02.255 "core_count": 1 00:28:02.255 } 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:02.255 | select(.opcode=="crc32c") 00:28:02.255 | "\(.module_name) \(.executed)"' 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1192522 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1192522 ']' 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1192522 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1192522 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1192522' 00:28:02.255 killing process with pid 1192522 00:28:02.255 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1192522 00:28:02.255 Received shutdown signal, test time was about 2.000000 seconds 00:28:02.255 00:28:02.255 Latency(us) 00:28:02.255 [2024-10-30T13:15:00.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.255 [2024-10-30T13:15:00.554Z] =================================================================================================================== 00:28:02.255 [2024-10-30T13:15:00.555Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.256 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1192522 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1193237 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1193237 /var/tmp/bperf.sock 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1193237 ']' 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.516 14:15:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.516 [2024-10-30 14:15:00.634792] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:28:02.516 [2024-10-30 14:15:00.634854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193237 ] 00:28:02.516 [2024-10-30 14:15:00.715851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.516 [2024-10-30 14:15:00.744443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.461 14:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.461 14:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:03.461 14:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:03.461 14:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:03.461 14:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:03.461 14:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.461 14:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.722 nvme0n1 00:28:03.722 14:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:03.722 14:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:03.983 Running I/O for 2 seconds... 
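After each timed run the harness reads the accel framework statistics back over the same socket and keeps only the crc32c operations; the pass succeeds when the executing module matches exp_module=software and the executed count is greater than zero, confirming the digests were actually computed. A sketch of that check, reusing the accel_get_stats call and jq filter visible in the trace and assuming the bdevperf socket is still up:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # dump accel stats and keep "<module_name> <executed>" for the crc32c opcode only
  $SPDK/scripts/rpc.py -s $SOCK accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # the trace expects this to print "software <count>" with a non-zero count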
00:28:05.869 30236.00 IOPS, 118.11 MiB/s [2024-10-30T13:15:04.168Z] 30428.50 IOPS, 118.86 MiB/s 00:28:05.869 Latency(us) 00:28:05.869 [2024-10-30T13:15:04.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.869 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:05.869 nvme0n1 : 2.01 30432.26 118.88 0.00 0.00 4200.62 2020.69 15400.96 00:28:05.869 [2024-10-30T13:15:04.168Z] =================================================================================================================== 00:28:05.869 [2024-10-30T13:15:04.168Z] Total : 30432.26 118.88 0.00 0.00 4200.62 2020.69 15400.96 00:28:05.869 { 00:28:05.869 "results": [ 00:28:05.869 { 00:28:05.869 "job": "nvme0n1", 00:28:05.869 "core_mask": "0x2", 00:28:05.869 "workload": "randwrite", 00:28:05.869 "status": "finished", 00:28:05.869 "queue_depth": 128, 00:28:05.869 "io_size": 4096, 00:28:05.869 "runtime": 2.006062, 00:28:05.869 "iops": 30432.259820484112, 00:28:05.869 "mibps": 118.87601492376606, 00:28:05.869 "io_failed": 0, 00:28:05.869 "io_timeout": 0, 00:28:05.869 "avg_latency_us": 4200.624266190547, 00:28:05.869 "min_latency_us": 2020.6933333333334, 00:28:05.869 "max_latency_us": 15400.96 00:28:05.869 } 00:28:05.869 ], 00:28:05.869 "core_count": 1 00:28:05.869 } 00:28:05.869 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:05.869 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:05.869 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:05.869 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:05.869 | select(.opcode=="crc32c") 00:28:05.869 | "\(.module_name) \(.executed)"' 00:28:05.869 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1193237 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1193237 ']' 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1193237 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193237 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193237' 00:28:06.130 killing process with pid 1193237 00:28:06.130 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1193237 00:28:06.130 Received shutdown signal, test time was about 2.000000 seconds 00:28:06.130 00:28:06.131 Latency(us) 00:28:06.131 [2024-10-30T13:15:04.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.131 [2024-10-30T13:15:04.430Z] =================================================================================================================== 00:28:06.131 [2024-10-30T13:15:04.430Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:06.131 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1193237 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1194064 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1194064 /var/tmp/bperf.sock 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194064 ']' 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.392 14:15:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.392 [2024-10-30 14:15:04.542340] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:28:06.392 [2024-10-30 14:15:04.542401] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194064 ] 00:28:06.392 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:06.392 Zero copy mechanism will not be used. 00:28:06.392 [2024-10-30 14:15:04.626388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.392 [2024-10-30 14:15:04.655965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.335 14:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:07.335 14:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:07.335 14:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:07.335 14:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:07.335 14:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:07.335 14:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.335 14:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.595 nvme0n1 00:28:07.595 14:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:07.595 14:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.855 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:07.855 Zero copy mechanism will not be used. 00:28:07.855 Running I/O for 2 seconds... 
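As after the previous pass, once this 2-second run completes the clean test verifies that the crc32c digests were computed by the expected accel module. The check the trace keeps repeating amounts to the following sketch (same SPDK_DIR shorthand as above):

    # Pull per-opcode accel statistics from bdevperf and keep only the crc32c row.
    stats=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | \
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    read -r acc_module acc_executed <<< "$stats"
    # With DSA scanning disabled (scan_dsa=false) the expected module is the software
    # path, and at least one crc32c operation must actually have been executed.
    (( acc_executed > 0 )) && [[ $acc_module == software ]]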
00:28:09.740 5055.00 IOPS, 631.88 MiB/s [2024-10-30T13:15:08.039Z] 5891.00 IOPS, 736.38 MiB/s 00:28:09.740 Latency(us) 00:28:09.740 [2024-10-30T13:15:08.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.740 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:09.740 nvme0n1 : 2.01 5885.12 735.64 0.00 0.00 2713.79 1126.40 11578.03 00:28:09.740 [2024-10-30T13:15:08.039Z] =================================================================================================================== 00:28:09.740 [2024-10-30T13:15:08.039Z] Total : 5885.12 735.64 0.00 0.00 2713.79 1126.40 11578.03 00:28:09.740 { 00:28:09.740 "results": [ 00:28:09.740 { 00:28:09.740 "job": "nvme0n1", 00:28:09.740 "core_mask": "0x2", 00:28:09.740 "workload": "randwrite", 00:28:09.740 "status": "finished", 00:28:09.740 "queue_depth": 16, 00:28:09.740 "io_size": 131072, 00:28:09.740 "runtime": 2.005228, 00:28:09.740 "iops": 5885.116305976178, 00:28:09.740 "mibps": 735.6395382470223, 00:28:09.740 "io_failed": 0, 00:28:09.740 "io_timeout": 0, 00:28:09.740 "avg_latency_us": 2713.794948450696, 00:28:09.740 "min_latency_us": 1126.4, 00:28:09.740 "max_latency_us": 11578.026666666667 00:28:09.740 } 00:28:09.740 ], 00:28:09.740 "core_count": 1 00:28:09.740 } 00:28:09.740 14:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:09.740 14:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:09.740 14:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:09.740 14:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:09.740 | select(.opcode=="crc32c") 00:28:09.740 | "\(.module_name) \(.executed)"' 00:28:09.740 14:15:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1194064 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194064 ']' 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194064 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194064 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194064' 00:28:10.001 killing process with pid 1194064 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194064 00:28:10.001 Received shutdown signal, test time was about 2.000000 seconds 00:28:10.001 00:28:10.001 Latency(us) 00:28:10.001 [2024-10-30T13:15:08.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.001 [2024-10-30T13:15:08.300Z] =================================================================================================================== 00:28:10.001 [2024-10-30T13:15:08.300Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.001 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194064 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1191505 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1191505 ']' 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1191505 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1191505 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1191505' 00:28:10.264 killing process with pid 1191505 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1191505 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1191505 00:28:10.264 00:28:10.264 real 0m16.626s 00:28:10.264 user 0m32.913s 00:28:10.264 sys 0m3.688s 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.264 ************************************ 00:28:10.264 END TEST nvmf_digest_clean 00:28:10.264 ************************************ 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.264 ************************************ 00:28:10.264 START TEST nvmf_digest_error 00:28:10.264 ************************************ 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.264 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.525 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1195026 00:28:10.525 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1195026 00:28:10.525 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:10.525 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1195026 ']' 00:28:10.525 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.525 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.525 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.525 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.525 14:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.525 [2024-10-30 14:15:08.624988] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:28:10.525 [2024-10-30 14:15:08.625041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.525 [2024-10-30 14:15:08.715255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.525 [2024-10-30 14:15:08.746225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.525 [2024-10-30 14:15:08.746255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.525 [2024-10-30 14:15:08.746261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.525 [2024-10-30 14:15:08.746266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.525 [2024-10-30 14:15:08.746270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
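The error-injection variant needs the target's accel framework wired up differently, which is why nvmf_tgt above is launched with --wait-for-rpc: crc32c has to be routed to the error module before accel initialization completes. A condensed sketch, not the harness's exact steps (the cvl_0_0_ns_spdk namespace and SPDK_DIR path come from the trace; the explicit framework_start_init call stands in for however the harness resumes initialization):

    # Start the target paused inside the test namespace so opcodes can still be re-assigned.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # wait for the default RPC socket
    # Route crc32c to the error-injection accel module, then let initialization finish.
    "$SPDK_DIR"/scripts/rpc.py accel_assign_opc -o crc32c -m error
    "$SPDK_DIR"/scripts/rpc.py framework_start_init

After that the usual target plumbing (null0 bdev, TCP transport, listener on 10.0.0.2:4420) is created, as the notices below show.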
00:28:10.525 [2024-10-30 14:15:08.746730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.471 [2024-10-30 14:15:09.452710] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.471 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.471 null0 00:28:11.472 [2024-10-30 14:15:09.526578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.472 [2024-10-30 14:15:09.550781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1195185 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1195185 /var/tmp/bperf.sock 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1195185 ']' 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:11.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.472 14:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.472 [2024-10-30 14:15:09.608157] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:28:11.472 [2024-10-30 14:15:09.608207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195185 ] 00:28:11.472 [2024-10-30 14:15:09.690964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.472 [2024-10-30 14:15:09.720894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.415 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.676 nvme0n1 00:28:12.939 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:12.939 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.939 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
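From here the error test arms the fault: the host-side bdev layer is told to keep NVMe error statistics and retry failed I/O indefinitely, crc32c injection is left disabled while the controller attaches with --ddgst, and only then is injection switched to corrupt digests (accel_error_inject_error -o crc32c -t corrupt -i 256) before the timed run starts. Roughly, with the same SPDK_DIR shorthand and the target RPCs going to its default /var/tmp/spdk.sock socket:

    # Host side: keep NVMe error statistics and retry failed I/O indefinitely.
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    # Target side: keep crc32c injection off while the host attaches with --ddgst ...
    "$SPDK_DIR"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # ... then enable 'corrupt' injection with the -i 256 argument the test passes.
    "$SPDK_DIR"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The stream of "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR" completions that follows is the expected outcome of the injection, not a test failure.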
00:28:12.939 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.939 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:12.939 14:15:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:12.939 Running I/O for 2 seconds... 00:28:12.939 [2024-10-30 14:15:11.097403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.097436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.097446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.106103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.106123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.106130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.116485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.116505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.116512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.126015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.126033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.126039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.135325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.135343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.135350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.144596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.144615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.144622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.154290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.154307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.154314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.162824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.162841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.162848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.171874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.171892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.171899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.181730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.181752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.181759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.189872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.189890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.189896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.198954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.198971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.198978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.208842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.208859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.208866] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.219113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.219131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.219137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.228867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.228884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.228891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.939 [2024-10-30 14:15:11.237467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:12.939 [2024-10-30 14:15:11.237484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.939 [2024-10-30 14:15:11.237491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.246882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.246899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.246910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.256032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.256049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.256056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.264912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.264929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.264936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.273612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.273629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 
14:15:11.273636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.282405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.282422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.282429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.292688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.292706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.292712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.301835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.301853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.301860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.310138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.310156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.310163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.319121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.319138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.319145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.328275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.328296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.328302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.338090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.338108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18737 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.338114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.348006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.348023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.348030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.356370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.356387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.356394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.365994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.366011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.366018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.374370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.374387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.374394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.384128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.384146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.384152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.392866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.392884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.392890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.401656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.401673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:121 nsid:1 lba:15466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.401680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.410652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.410669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.410675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.419697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.419714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.419721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.430136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.430154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.430160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.439684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.202 [2024-10-30 14:15:11.439701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.202 [2024-10-30 14:15:11.439708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.202 [2024-10-30 14:15:11.447757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.203 [2024-10-30 14:15:11.447775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.203 [2024-10-30 14:15:11.447781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.203 [2024-10-30 14:15:11.458526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.203 [2024-10-30 14:15:11.458544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.203 [2024-10-30 14:15:11.458551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.203 [2024-10-30 14:15:11.467102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.203 [2024-10-30 14:15:11.467120] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.203 [2024-10-30 14:15:11.467126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.203 [2024-10-30 14:15:11.476811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.203 [2024-10-30 14:15:11.476828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.203 [2024-10-30 14:15:11.476835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.203 [2024-10-30 14:15:11.485783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.203 [2024-10-30 14:15:11.485800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.203 [2024-10-30 14:15:11.485809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.203 [2024-10-30 14:15:11.494309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.203 [2024-10-30 14:15:11.494327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.203 [2024-10-30 14:15:11.494333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.504304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.504322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.504329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.515235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.515256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.515263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.523636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.523654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.523660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.532659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.532676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.532683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.543008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.543025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.543032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.552364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.552381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.552388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.561960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.561978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.561984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.570172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.570189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.570196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.580603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.580621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.580627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.588687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.588704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.588711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.599387] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.599405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.599412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.607644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.607662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.607669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.617635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.617653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.617660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.630056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.630075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.630082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.640542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.640560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.640567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.651873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.651891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.651901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.662232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.662250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.662256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.670978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.670995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.671002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.679352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.466 [2024-10-30 14:15:11.679370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.466 [2024-10-30 14:15:11.679376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.466 [2024-10-30 14:15:11.689592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.467 [2024-10-30 14:15:11.689610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.467 [2024-10-30 14:15:11.689616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.467 [2024-10-30 14:15:11.701380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.467 [2024-10-30 14:15:11.701397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.467 [2024-10-30 14:15:11.701404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.467 [2024-10-30 14:15:11.710642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.467 [2024-10-30 14:15:11.710659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.467 [2024-10-30 14:15:11.710665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.467 [2024-10-30 14:15:11.719763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.467 [2024-10-30 14:15:11.719781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.467 [2024-10-30 14:15:11.719787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.467 [2024-10-30 14:15:11.729647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.467 [2024-10-30 14:15:11.729664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.467 [2024-10-30 14:15:11.729671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.467 [2024-10-30 14:15:11.738641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.467 [2024-10-30 14:15:11.738661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.467 [2024-10-30 14:15:11.738667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.467 [2024-10-30 14:15:11.747854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.467 [2024-10-30 14:15:11.747873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.467 [2024-10-30 14:15:11.747879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.467 [2024-10-30 14:15:11.756581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.467 [2024-10-30 14:15:11.756598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.467 [2024-10-30 14:15:11.756605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.765222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.765240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.765246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.774996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.775013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.775019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.782944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.782961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.782968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.792586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.792603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.792609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.801390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.801407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.801413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.810481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.810499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.810505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.820182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.820200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.820206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.828626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.828643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.828650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.838383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.838400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.838407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.846684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.846701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.846708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.855798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.855816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:13.730 [2024-10-30 14:15:11.855822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.865277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.865294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.865301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.873607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.873625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.873633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.883087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.883104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.883111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.892781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.892799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.892808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.904204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.904222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.904228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.912304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.912321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.912328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.922635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.922653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:15607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.922659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.930332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.930349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.930356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.940618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.940635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.940641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.950133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.950150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.950157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.958738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.958759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.958766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.968097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.968115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.968121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.977355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.977372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.977379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.986810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.986827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.986833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:11.996295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:11.996312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:11.996319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:12.006525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:12.006542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.730 [2024-10-30 14:15:12.006549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.730 [2024-10-30 14:15:12.015295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.730 [2024-10-30 14:15:12.015313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.731 [2024-10-30 14:15:12.015320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.731 [2024-10-30 14:15:12.022822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.731 [2024-10-30 14:15:12.022839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.731 [2024-10-30 14:15:12.022846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.033053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.033070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.033077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.041311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.041329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.041335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.051700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 
00:28:13.993 [2024-10-30 14:15:12.051718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.051728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.061508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.061525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.061532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.069496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.069513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.069520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 26950.00 IOPS, 105.27 MiB/s [2024-10-30T13:15:12.292Z] [2024-10-30 14:15:12.079137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.079154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.079161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.088488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.088505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.088512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.096761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.096777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.096784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.107609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.107627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.107634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.117156] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.117174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.117180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.126243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.126260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.126267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.134453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.134473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.134480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.144362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.144380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.144387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.153407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.153423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.153430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.161714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.161731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.993 [2024-10-30 14:15:12.161738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.993 [2024-10-30 14:15:12.170682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.993 [2024-10-30 14:15:12.170699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.170706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:13.994 [2024-10-30 14:15:12.180033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.180050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.180056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.188475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.188492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.188498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.198033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.198050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.198056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.207882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.207899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.207906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.216526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.216544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.216550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.225059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.225076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.225083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.235204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.235221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.235227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.244695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.244712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.244719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.255535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.255553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.255559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.265168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.265185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.265191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.274230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.274246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.274253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.283471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.283487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.283494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.994 [2024-10-30 14:15:12.291225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:13.994 [2024-10-30 14:15:12.291242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.994 [2024-10-30 14:15:12.291251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.256 [2024-10-30 14:15:12.300906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.256 [2024-10-30 14:15:12.300923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.256 [2024-10-30 14:15:12.300930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.256 [2024-10-30 14:15:12.311312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.256 [2024-10-30 14:15:12.311329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.256 [2024-10-30 14:15:12.311335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.256 [2024-10-30 14:15:12.321095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.256 [2024-10-30 14:15:12.321112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.256 [2024-10-30 14:15:12.321118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.256 [2024-10-30 14:15:12.330136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.256 [2024-10-30 14:15:12.330153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.256 [2024-10-30 14:15:12.330159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.256 [2024-10-30 14:15:12.338780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.256 [2024-10-30 14:15:12.338796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.256 [2024-10-30 14:15:12.338803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.256 [2024-10-30 14:15:12.347937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.256 [2024-10-30 14:15:12.347954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.256 [2024-10-30 14:15:12.347960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.256 [2024-10-30 14:15:12.356601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.256 [2024-10-30 14:15:12.356618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.256 [2024-10-30 14:15:12.356624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.256 [2024-10-30 14:15:12.365865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.256 [2024-10-30 14:15:12.365881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:14.257 [2024-10-30 14:15:12.365888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.374670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.374687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.374694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.383254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.383271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.383278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.392632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.392649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.392655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.403413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.403431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.403437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.412881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.412899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.412905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.422263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.422280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.422286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.430653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.430670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:11611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.430676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.441158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.441175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.441182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.449791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.449808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.449818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.460613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.460630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.460637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.469757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.469774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.469781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.478964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.478981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.478987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.488218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.488235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.488242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.497623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.497639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.497646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.505606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.505622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.505629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.516025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.516043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.516049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.526414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.526431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.526437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.537164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.537184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.537190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.257 [2024-10-30 14:15:12.545306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.257 [2024-10-30 14:15:12.545323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.257 [2024-10-30 14:15:12.545329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.556133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.556150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.556157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.566322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 
00:28:14.519 [2024-10-30 14:15:12.566340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.566346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.574777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.574794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.574801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.584702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.584719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.584725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.592866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.592883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.592890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.601587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.601603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.601610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.611067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.611084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.611090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.620705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.620723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.620729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.629985] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.630002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.630008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.639184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.639202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.639208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.647886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.519 [2024-10-30 14:15:12.647903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.519 [2024-10-30 14:15:12.647909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.519 [2024-10-30 14:15:12.656192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.656210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.656216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.665729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.665750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.665757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.675609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.675626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.675632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.684042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.684059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.684066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.692902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.692920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.692929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.701956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.701973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.701980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.711172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.711189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.711196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.720505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.720523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.720530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.730658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.730675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.730682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.739552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.739569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.739576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.748046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.748063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.748070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.757240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.757257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.757264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.765670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.765687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.765693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.774761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.774778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.774785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.786428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.786445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.786452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.795753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.795771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.795778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.804666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.804683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.804690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.520 [2024-10-30 14:15:12.813039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.520 [2024-10-30 14:15:12.813057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.520 [2024-10-30 14:15:12.813063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.823235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.823253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.823260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.833672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.833689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.833696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.841330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.841347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.841354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.852605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.852622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.852632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.863135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.863153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.863160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.871241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.871258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.871265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.882518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.882535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:14.782 [2024-10-30 14:15:12.882542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.891347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.891364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.891370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.900283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.900300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.900307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.907983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.908000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.908007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.918383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.918400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.918407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.930635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.930653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.930659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.938156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.938175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.938182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.949519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.949536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:9594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.949542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.960479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.960496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.960503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.970042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.970058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.970065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.981523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.981541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.981547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:12.991665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:12.991683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:12.991689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:13.003578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:13.003596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:13.003603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:13.011914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:13.011933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:13.011940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:13.022419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:13.022436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:13.022442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:13.032412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:13.032429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:13.032435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:13.041837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:13.041855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:13.041861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:13.051206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.782 [2024-10-30 14:15:13.051224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.782 [2024-10-30 14:15:13.051230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.782 [2024-10-30 14:15:13.060758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.783 [2024-10-30 14:15:13.060775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.783 [2024-10-30 14:15:13.060781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.783 [2024-10-30 14:15:13.070138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.783 [2024-10-30 14:15:13.070155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.783 [2024-10-30 14:15:13.070161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.783 27024.00 IOPS, 105.56 MiB/s [2024-10-30T13:15:13.082Z] [2024-10-30 14:15:13.079639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24145b0) 00:28:14.783 [2024-10-30 14:15:13.079654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.783 [2024-10-30 14:15:13.079660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.045 00:28:15.045 Latency(us) 00:28:15.045 [2024-10-30T13:15:13.344Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.045 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:15.046 nvme0n1 : 2.00 27043.18 105.64 0.00 0.00 4729.06 2211.84 19114.67 00:28:15.046 [2024-10-30T13:15:13.345Z] =================================================================================================================== 00:28:15.046 [2024-10-30T13:15:13.345Z] Total : 27043.18 105.64 0.00 0.00 4729.06 2211.84 19114.67 00:28:15.046 { 00:28:15.046 "results": [ 00:28:15.046 { 00:28:15.046 "job": "nvme0n1", 00:28:15.046 "core_mask": "0x2", 00:28:15.046 "workload": "randread", 00:28:15.046 "status": "finished", 00:28:15.046 "queue_depth": 128, 00:28:15.046 "io_size": 4096, 00:28:15.046 "runtime": 2.003315, 00:28:15.046 "iops": 27043.17593588627, 00:28:15.046 "mibps": 105.63740599955574, 00:28:15.046 "io_failed": 0, 00:28:15.046 "io_timeout": 0, 00:28:15.046 "avg_latency_us": 4729.056270919473, 00:28:15.046 "min_latency_us": 2211.84, 00:28:15.046 "max_latency_us": 19114.666666666668 00:28:15.046 } 00:28:15.046 ], 00:28:15.046 "core_count": 1 00:28:15.046 } 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:15.046 | .driver_specific 00:28:15.046 | .nvme_error 00:28:15.046 | .status_code 00:28:15.046 | .command_transient_transport_error' 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 )) 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1195185 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1195185 ']' 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1195185 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:15.046 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195185 00:28:15.308 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:15.308 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:15.308 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195185' 00:28:15.308 killing process with pid 1195185 00:28:15.308 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1195185 00:28:15.308 Received shutdown signal, test time was about 2.000000 seconds 00:28:15.308 00:28:15.308 Latency(us) 00:28:15.308 [2024-10-30T13:15:13.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.308 
[2024-10-30T13:15:13.607Z] =================================================================================================================== 00:28:15.308 [2024-10-30T13:15:13.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.308 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1195185 00:28:15.308 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:15.308 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:15.308 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:15.308 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196340 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196340 /var/tmp/bperf.sock 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196340 ']' 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.309 14:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.309 [2024-10-30 14:15:13.501696] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:28:15.309 [2024-10-30 14:15:13.501778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196340 ] 00:28:15.309 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:15.309 Zero copy mechanism will not be used. 
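The (( 212 > 0 )) check traced above is the harness reading back the per-command NVMe error counters from bdevperf after the 4 KiB / qd128 pass. A minimal stand-alone sketch of that same query, using only the socket, bdev name and jq path that appear verbatim in this log:

# Sketch only, not harness output. Mirrors get_transient_errcount / bperf_rpc as traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# digest.sh@71 requires this counter to be non-zero (212 in the run above) before it
# kills the bdevperf process (pid 1195185 here) and moves on to the next I/O size.
(( errcount > 0 ))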
00:28:15.309 [2024-10-30 14:15:13.584860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.569 [2024-10-30 14:15:13.614059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.140 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.140 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:16.140 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.140 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.140 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.401 nvme0n1 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:16.401 14:15:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:16.663 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:16.663 Zero copy mechanism will not be used. 00:28:16.663 Running I/O for 2 seconds... 
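Everything from here to the end of the run is the 128 KiB / qd16 pass with crc32c error injection armed, so each "data digest error" entry below is expected. A condensed sketch of the sequence just traced, assuming only the paths and flags shown in this log; the socket behind rpc_cmd is not expanded in this trace, so the default rpc.py socket is an assumption:

# Sketch of run_bperf_err randread 131072 16 as traced above. Binary paths, flags, address
# and NQN are copied from this log; the rpc_cmd socket is assumed (rpc.py default).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
rpc_cmd()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # assumption: default SPDK RPC socket

# Start bdevperf idle (-z) on core mask 0x2 with a 2 s, 128 KiB, qd=16 randread job.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z &
while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.2; done   # harness: waitforlisten 1196340

# Keep per-command NVMe error statistics and retry failed commands without limit (-1).
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with data digest enabled (--ddgst) while injection is still disabled.
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c operation, then run the workload; the digest errors that
# follow are those corrupted checksums being caught on the TCP receive path.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests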
00:28:16.663 [2024-10-30 14:15:14.790695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.790730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.790740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.800863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.800891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.800899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.806587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.806606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.806613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.814015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.814034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.814041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.824243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.824262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.824269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.834201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.834220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.834226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.840570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.840589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.840596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.851103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.851124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.851131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.860073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.860093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.860100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.872674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.872692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.872698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.884524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.884544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.884550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.895851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.895870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.895877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.907995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.908014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.908021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.920372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.920391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.920398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.932516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.932535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.932542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.940381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.940400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.940407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.944836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.944855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.944862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.952698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.952717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.952724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.663 [2024-10-30 14:15:14.960422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.663 [2024-10-30 14:15:14.960440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.663 [2024-10-30 14:15:14.960450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.925 [2024-10-30 14:15:14.972298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.925 [2024-10-30 14:15:14.972318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.925 [2024-10-30 14:15:14.972324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.925 [2024-10-30 14:15:14.983314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.925 [2024-10-30 14:15:14.983333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:16.925 [2024-10-30 14:15:14.983339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.925 [2024-10-30 14:15:14.995471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.925 [2024-10-30 14:15:14.995490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.925 [2024-10-30 14:15:14.995496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.925 [2024-10-30 14:15:15.007649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.925 [2024-10-30 14:15:15.007668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.925 [2024-10-30 14:15:15.007675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.018520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.018539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.018546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.030321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.030340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.030347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.042457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.042476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.042483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.054344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.054363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.054369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.060997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.061020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.061026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.065655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.065674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.065681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.071616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.071634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.071641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.080500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.080518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.080525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.089326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.089346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.089352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.093517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.093535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.093543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.097599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.097617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.097624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.105517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.105537] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.105543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.111484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.111503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.111510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.119923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.119941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.119948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.128422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.128441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.128448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.137678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.137698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.137704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.143699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.143718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.143724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.152400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.152419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.152425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.160058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.160077] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.160083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.168792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.168810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.168817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.176804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.176823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.176830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.187837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.187856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.187865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.198605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.198625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.198631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.210481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.210501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.210508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.926 [2024-10-30 14:15:15.223554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:16.926 [2024-10-30 14:15:15.223573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.926 [2024-10-30 14:15:15.223579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.189 [2024-10-30 14:15:15.235825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1531ab0) 00:28:17.189 [2024-10-30 14:15:15.235844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.189 [2024-10-30 14:15:15.235851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.189 [2024-10-30 14:15:15.245384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.189 [2024-10-30 14:15:15.245403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.189 [2024-10-30 14:15:15.245410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.189 [2024-10-30 14:15:15.250161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.189 [2024-10-30 14:15:15.250180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.189 [2024-10-30 14:15:15.250187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.189 [2024-10-30 14:15:15.254752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.189 [2024-10-30 14:15:15.254771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.189 [2024-10-30 14:15:15.254777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.189 [2024-10-30 14:15:15.259737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.189 [2024-10-30 14:15:15.259761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.189 [2024-10-30 14:15:15.259768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.189 [2024-10-30 14:15:15.264742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.189 [2024-10-30 14:15:15.264765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.189 [2024-10-30 14:15:15.264772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.189 [2024-10-30 14:15:15.274575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.189 [2024-10-30 14:15:15.274594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.274600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.285836] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.285854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.285861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.296660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.296679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.296686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.306937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.306957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.306963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.317435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.317454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.317461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.324039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.324058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.324065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.332255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.332273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.332280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.336679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.336698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.336707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:17.190 [2024-10-30 14:15:15.343695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.343714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.343721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.350681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.350701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.350707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.357144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.357164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.357170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.367857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.367877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.367883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.378588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.378608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.378614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.387134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.387153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.387159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.396611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.396630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.396636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.401326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.401345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.401353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.405785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.405807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.405813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.414210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.414229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.414236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.418709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.418727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.418734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.425636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.425655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.425661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.436328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.436347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.436354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.444520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.444539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.444546] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.453912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.453931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.453937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.463998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.464017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.464023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.474736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.474760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.474767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.190 [2024-10-30 14:15:15.482865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.190 [2024-10-30 14:15:15.482884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.190 [2024-10-30 14:15:15.482891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.489730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.489757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.489763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.495432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.495451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.495458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.499834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.499853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.499859] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.504472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.504491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.504497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.508945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.508963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.508969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.516513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.516533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.516539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.525064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.525083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.525089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.536326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.536345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.536354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.547971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.547990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.547996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.560211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.560230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:17.457 [2024-10-30 14:15:15.560236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.572607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.572626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.572633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.457 [2024-10-30 14:15:15.584769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.457 [2024-10-30 14:15:15.584788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.457 [2024-10-30 14:15:15.584794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.597641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.597660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.597667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.610235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.610254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.610261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.622533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.622551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.622558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.634506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.634525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.634531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.647386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.647408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.647414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.659259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.659278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.659285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.670948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.670966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.670973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.682891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.682910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.682917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.694387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.694405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.694412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.705728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.705753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.705760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.717886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.717904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.717910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.730063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.730082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.730089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.741284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.741302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.741309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.458 [2024-10-30 14:15:15.752011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.458 [2024-10-30 14:15:15.752030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.458 [2024-10-30 14:15:15.752037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.762662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.762681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.762687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.773953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.773971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.773978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 3360.00 IOPS, 420.00 MiB/s [2024-10-30T13:15:16.018Z] [2024-10-30 14:15:15.784944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.784964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.784971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.795007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.795025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.795032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.805075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.805093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.805099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.812579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.812597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.812604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.822742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.822766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.822772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.831653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.831675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.831682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.843847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.843865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.843871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.854964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.854981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.854988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.866565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.866583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.866590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.878086] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.878103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.878110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.889170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.889188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.889194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.899663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.899681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.899688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.910979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.910997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.911004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.922827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.922845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.922851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.931824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.931842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.931849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.943037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.943055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.943062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.956015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.956033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.719 [2024-10-30 14:15:15.956039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.719 [2024-10-30 14:15:15.964839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.719 [2024-10-30 14:15:15.964857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.720 [2024-10-30 14:15:15.964863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.720 [2024-10-30 14:15:15.975959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.720 [2024-10-30 14:15:15.975977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.720 [2024-10-30 14:15:15.975983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.720 [2024-10-30 14:15:15.984656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.720 [2024-10-30 14:15:15.984674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.720 [2024-10-30 14:15:15.984680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.720 [2024-10-30 14:15:15.996626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.720 [2024-10-30 14:15:15.996644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.720 [2024-10-30 14:15:15.996651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.720 [2024-10-30 14:15:16.007508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.720 [2024-10-30 14:15:16.007526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.720 [2024-10-30 14:15:16.007532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.720 [2024-10-30 14:15:16.017459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.720 [2024-10-30 14:15:16.017477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.720 [2024-10-30 14:15:16.017489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.026092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.026111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.026117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.037005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.037023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.037030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.047878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.047896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.047902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.058969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.058987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.058993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.068695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.068713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.068719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.077933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.077951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.077957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.089473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.089491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.089498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.099925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.099943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.099949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.109941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.109961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.109968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.119857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.119874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.119881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.130059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.130077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.130084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.139845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.139863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.139870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.149183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.149201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.149208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.159008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.159025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.159032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.170069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.170087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.170093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.178147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.178165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.178172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.190739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.190760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.190767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.196576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.196594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.196600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.209042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.209060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.209066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.218356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.218375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.218381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.227745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.227768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.227775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.238058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.238077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.238083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.247328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.247346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.247352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.257220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.981 [2024-10-30 14:15:16.257238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.981 [2024-10-30 14:15:16.257244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.981 [2024-10-30 14:15:16.263785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.982 [2024-10-30 14:15:16.263803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.982 [2024-10-30 14:15:16.263810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.982 [2024-10-30 14:15:16.273078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:17.982 [2024-10-30 14:15:16.273099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.982 [2024-10-30 14:15:16.273106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.282307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.282326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.282332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.292901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.292919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.292926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.304000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.304017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.304024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.311485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.311503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.311509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.322345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.322363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.322369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.332663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.332681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.332687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.343083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.343101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.343107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.356058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.356076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.356082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.361129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 
00:28:18.242 [2024-10-30 14:15:16.361148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.361155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.369757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.369775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.369782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.379739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.379761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.379767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.387049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.387067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.387074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.398405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.398423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.398430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.404105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.404123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.404129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.416575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.416593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.416599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.421978] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.421996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.422003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.431073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.431090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.431100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.440557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.440575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.440582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.451674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.242 [2024-10-30 14:15:16.451691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.242 [2024-10-30 14:15:16.451698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-10-30 14:15:16.461218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.243 [2024-10-30 14:15:16.461236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.243 [2024-10-30 14:15:16.461242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.243 [2024-10-30 14:15:16.468870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.243 [2024-10-30 14:15:16.468888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.243 [2024-10-30 14:15:16.468895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.243 [2024-10-30 14:15:16.479986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.243 [2024-10-30 14:15:16.480004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.243 [2024-10-30 14:15:16.480010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:18.243 [2024-10-30 14:15:16.487768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.243 [2024-10-30 14:15:16.487786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.243 [2024-10-30 14:15:16.487793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.243 [2024-10-30 14:15:16.494617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.243 [2024-10-30 14:15:16.494635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.243 [2024-10-30 14:15:16.494642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.243 [2024-10-30 14:15:16.504719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.243 [2024-10-30 14:15:16.504737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.243 [2024-10-30 14:15:16.504744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.243 [2024-10-30 14:15:16.515850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.243 [2024-10-30 14:15:16.515871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.243 [2024-10-30 14:15:16.515878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.243 [2024-10-30 14:15:16.526511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.243 [2024-10-30 14:15:16.526529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.243 [2024-10-30 14:15:16.526536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.243 [2024-10-30 14:15:16.536990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.243 [2024-10-30 14:15:16.537008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.243 [2024-10-30 14:15:16.537014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.548679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.548697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.548704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.557324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.557343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.557349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.567107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.567126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.567133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.578703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.578723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.578729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.590655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.590674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.590680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.602759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.602778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.602787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.614518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.614538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.614545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.625625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.625644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.625650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.634539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.634558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.634565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.642651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.642671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.642677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.653283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.653302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.653309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.664476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.664495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.664502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.676705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.676724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.676730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.688691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.688711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.688717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.701199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.701220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 
[2024-10-30 14:15:16.701227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.713117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.713135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.713142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.725539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.725557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.725563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.736998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.737016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.737023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.745164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.745183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.745190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.754985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.755004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.755010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.763893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.763912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.763919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.506 [2024-10-30 14:15:16.770550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.770568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.770575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.506 3222.50 IOPS, 402.81 MiB/s [2024-10-30T13:15:16.805Z] [2024-10-30 14:15:16.779782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531ab0) 00:28:18.506 [2024-10-30 14:15:16.779797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.506 [2024-10-30 14:15:16.779804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.506 00:28:18.506 Latency(us) 00:28:18.506 [2024-10-30T13:15:16.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.506 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:18.506 nvme0n1 : 2.00 3226.76 403.34 0.00 0.00 4954.22 645.12 12997.97 00:28:18.506 [2024-10-30T13:15:16.806Z] =================================================================================================================== 00:28:18.507 [2024-10-30T13:15:16.806Z] Total : 3226.76 403.34 0.00 0.00 4954.22 645.12 12997.97 00:28:18.507 { 00:28:18.507 "results": [ 00:28:18.507 { 00:28:18.507 "job": "nvme0n1", 00:28:18.507 "core_mask": "0x2", 00:28:18.507 "workload": "randread", 00:28:18.507 "status": "finished", 00:28:18.507 "queue_depth": 16, 00:28:18.507 "io_size": 131072, 00:28:18.507 "runtime": 2.002319, 00:28:18.507 "iops": 3226.7585734341033, 00:28:18.507 "mibps": 403.3448216792629, 00:28:18.507 "io_failed": 0, 00:28:18.507 "io_timeout": 0, 00:28:18.507 "avg_latency_us": 4954.2158633854415, 00:28:18.507 "min_latency_us": 645.12, 00:28:18.507 "max_latency_us": 12997.973333333333 00:28:18.507 } 00:28:18.507 ], 00:28:18.507 "core_count": 1 00:28:18.507 } 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:18.769 | .driver_specific 00:28:18.769 | .nvme_error 00:28:18.769 | .status_code 00:28:18.769 | .command_transient_transport_error' 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 )) 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196340 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196340 ']' 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196340 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:18.769 14:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196340 00:28:18.769 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:18.769 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:18.769 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196340' 00:28:18.769 killing process with pid 1196340 00:28:18.769 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196340 00:28:18.769 Received shutdown signal, test time was about 2.000000 seconds 00:28:18.769 00:28:18.769 Latency(us) 00:28:18.769 [2024-10-30T13:15:17.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.769 [2024-10-30T13:15:17.068Z] =================================================================================================================== 00:28:18.769 [2024-10-30T13:15:17.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.769 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196340 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197102 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1197102 /var/tmp/bperf.sock 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197102 ']' 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:19.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.030 14:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:19.030 [2024-10-30 14:15:17.203105] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:28:19.030 [2024-10-30 14:15:17.203160] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197102 ] 00:28:19.030 [2024-10-30 14:15:17.285355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.030 [2024-10-30 14:15:17.314045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:19.971 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.541 nvme0n1 00:28:20.541 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:20.541 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.541 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.541 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.541 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:20.541 14:15:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:20.541 Running I/O for 2 seconds... 
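The trace above sets up the second digest-error pass (randwrite, 4096-byte I/O, queue depth 128, 2 seconds): bdevperf is started with -z and driven over /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with --ddgst so data digests are carried and verified on data PDUs, crc32c error injection is re-armed, and perform_tests runs the workload. A minimal shell sketch of the same RPC sequence, using only the socket, target address, and paths captured in this run (adjust for another environment; the closing jq filter is the same check host/digest.sh applies, which must return a non-zero count, 208 in the randread pass above):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # bdevperf-side settings: keep NVMe error statistics and retry failed I/O indefinitely
  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # clear any injection left over from the previous pass; the captured run issues this
  # through the plain rpc_cmd helper (default RPC socket), not the bperf socket
  $RPC accel_error_inject_error -o crc32c -t disable

  # attach the subsystem with data digest enabled so corrupted digests are detected
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # re-enable crc32c error injection with the parameters used in this run
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

  # drive the configured 2-second workload against the attached bdev
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s $BPERF_SOCK perform_tests

  # each injected digest error surfaces as COMMAND TRANSIENT TRANSPORT ERROR (00/22)
  # and is accumulated in this per-bdev counter
  $RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The WRITE completions that follow are the result of that injection being hit during the run; the test passes as long as the counter read at the end is greater than zero.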
00:28:20.541 [2024-10-30 14:15:18.763384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e73e0 00:28:20.541 [2024-10-30 14:15:18.764296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.541 [2024-10-30 14:15:18.764327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.541 [2024-10-30 14:15:18.772099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:20.541 [2024-10-30 14:15:18.772963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.541 [2024-10-30 14:15:18.772983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.541 [2024-10-30 14:15:18.780775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e5220 00:28:20.541 [2024-10-30 14:15:18.781666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.541 [2024-10-30 14:15:18.781683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.541 [2024-10-30 14:15:18.789441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e1f80 00:28:20.541 [2024-10-30 14:15:18.790343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.541 [2024-10-30 14:15:18.790360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.541 [2024-10-30 14:15:18.798126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e3060 00:28:20.541 [2024-10-30 14:15:18.799001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.541 [2024-10-30 14:15:18.799018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.541 [2024-10-30 14:15:18.806791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e4140 00:28:20.541 [2024-10-30 14:15:18.807678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.541 [2024-10-30 14:15:18.807696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.542 [2024-10-30 14:15:18.815432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 00:28:20.542 [2024-10-30 14:15:18.816313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.542 [2024-10-30 14:15:18.816330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.542 [2024-10-30 14:15:18.824100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f0ff8 00:28:20.542 [2024-10-30 14:15:18.824985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.542 [2024-10-30 14:15:18.825002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.542 [2024-10-30 14:15:18.832776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eff18 00:28:20.542 [2024-10-30 14:15:18.833654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.542 [2024-10-30 14:15:18.833671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.841420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eee38 00:28:20.803 [2024-10-30 14:15:18.842297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.842313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.850062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166edd58 00:28:20.803 [2024-10-30 14:15:18.850961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.850978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.858693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ecc78 00:28:20.803 [2024-10-30 14:15:18.859572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.859588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.867359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ebb98 00:28:20.803 [2024-10-30 14:15:18.868211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.868227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.875986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaab8 00:28:20.803 [2024-10-30 14:15:18.876850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.876866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.884609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e99d8 00:28:20.803 [2024-10-30 14:15:18.885494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.885511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.893225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e88f8 00:28:20.803 [2024-10-30 14:15:18.894095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.894112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.901848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e7818 00:28:20.803 [2024-10-30 14:15:18.902733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.902756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.910480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6738 00:28:20.803 [2024-10-30 14:15:18.911370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.911387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.919114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e5658 00:28:20.803 [2024-10-30 14:15:18.919982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.919999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.927140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fc560 00:28:20.803 [2024-10-30 14:15:18.927988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.928004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.937055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e9e10 00:28:20.803 [2024-10-30 14:15:18.937831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.803 [2024-10-30 14:15:18.937848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.803 [2024-10-30 14:15:18.946270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f35f0 00:28:20.803 [2024-10-30 14:15:18.947493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:18.947509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:18.954914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fb048 00:28:20.804 [2024-10-30 14:15:18.956151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:18.956167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:18.963542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f5378 00:28:20.804 [2024-10-30 14:15:18.964730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:18.964750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:18.972174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ff3c8 00:28:20.804 [2024-10-30 14:15:18.973388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:18.973405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:18.980802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f35f0 00:28:20.804 [2024-10-30 14:15:18.982028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:18.982044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:18.989428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fb048 00:28:20.804 [2024-10-30 14:15:18.990656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:18.990672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:18.998079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f5378 00:28:20.804 [2024-10-30 14:15:18.999292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:18.999308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.006700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ff3c8 00:28:20.804 [2024-10-30 14:15:19.007906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.007923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.015321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f35f0 00:28:20.804 [2024-10-30 14:15:19.016537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.016554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.023937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fb048 00:28:20.804 [2024-10-30 14:15:19.025181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.025197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.033459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f5378 00:28:20.804 [2024-10-30 14:15:19.035064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.035080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.039610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f8a50 00:28:20.804 [2024-10-30 14:15:19.040402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.040419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.048406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f9b30 00:28:20.804 [2024-10-30 14:15:19.049207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.049223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.057030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fac10 00:28:20.804 [2024-10-30 14:15:19.057857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.057874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.065647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fbcf0 00:28:20.804 [2024-10-30 14:15:19.066454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.066470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.074262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:20.804 [2024-10-30 14:15:19.075065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.075081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.082872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e5a90 00:28:20.804 [2024-10-30 14:15:19.083682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.083699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.091580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e1710 00:28:20.804 [2024-10-30 14:15:19.092374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.092392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:20.804 [2024-10-30 14:15:19.100223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e27f0 00:28:20.804 [2024-10-30 14:15:19.100998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.804 [2024-10-30 14:15:19.101015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.065 [2024-10-30 14:15:19.108848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f0bc0 00:28:21.065 [2024-10-30 14:15:19.109663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.065 [2024-10-30 14:15:19.109680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.065 [2024-10-30 14:15:19.117471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f1ca0 00:28:21.065 [2024-10-30 14:15:19.118269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.065 [2024-10-30 14:15:19.118286] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.065 [2024-10-30 14:15:19.126110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e4de8 00:28:21.065 [2024-10-30 14:15:19.126925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.126945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.134721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e3d08 00:28:21.066 [2024-10-30 14:15:19.135523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.135540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.143365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fda78 00:28:21.066 [2024-10-30 14:15:19.144160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.144176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.151985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ed0b0 00:28:21.066 [2024-10-30 14:15:19.152777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.152793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.160596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ebfd0 00:28:21.066 [2024-10-30 14:15:19.161406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.161423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.169198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaef0 00:28:21.066 [2024-10-30 14:15:19.170010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.170028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.177802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f8618 00:28:21.066 [2024-10-30 14:15:19.178556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 
14:15:19.178572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.186730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166df118 00:28:21.066 [2024-10-30 14:15:19.187659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.187676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.196804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:21.066 [2024-10-30 14:15:19.198196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.198212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.204418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f6020 00:28:21.066 [2024-10-30 14:15:19.205284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.205303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.212958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ea680 00:28:21.066 [2024-10-30 14:15:19.213736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.213758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.221604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f7970 00:28:21.066 [2024-10-30 14:15:19.222422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.222439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.230895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fef90 00:28:21.066 [2024-10-30 14:15:19.232062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.232078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.239698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f2510 00:28:21.066 [2024-10-30 14:15:19.240892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:21.066 [2024-10-30 14:15:19.240909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.248341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f4298 00:28:21.066 [2024-10-30 14:15:19.249483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.249500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.256989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:21.066 [2024-10-30 14:15:19.258188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.258205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.265615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ec408 00:28:21.066 [2024-10-30 14:15:19.266814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.266831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.274286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166edd58 00:28:21.066 [2024-10-30 14:15:19.275465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.275482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.282952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fef90 00:28:21.066 [2024-10-30 14:15:19.284158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.284175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.291620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f2510 00:28:21.066 [2024-10-30 14:15:19.292814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.292830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.300266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f4298 00:28:21.066 [2024-10-30 14:15:19.301408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23225 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:21.066 [2024-10-30 14:15:19.301424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.308904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:21.066 [2024-10-30 14:15:19.310103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.310120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.317529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ec408 00:28:21.066 [2024-10-30 14:15:19.318713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.318730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.326167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166edd58 00:28:21.066 [2024-10-30 14:15:19.327354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.327372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.334830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fef90 00:28:21.066 [2024-10-30 14:15:19.335995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.336012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.343479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f2510 00:28:21.066 [2024-10-30 14:15:19.344669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.344685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.352154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f4298 00:28:21.066 [2024-10-30 14:15:19.353317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.353333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.066 [2024-10-30 14:15:19.360781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:21.066 [2024-10-30 14:15:19.361956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13021 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:21.066 [2024-10-30 14:15:19.361973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.369403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ec408 00:28:21.328 [2024-10-30 14:15:19.370599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.370616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.378051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166edd58 00:28:21.328 [2024-10-30 14:15:19.379231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.379247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.386689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fef90 00:28:21.328 [2024-10-30 14:15:19.387877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.387894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.395310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f2510 00:28:21.328 [2024-10-30 14:15:19.396496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.396512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.403933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f4298 00:28:21.328 [2024-10-30 14:15:19.405074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.405090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.412563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:21.328 [2024-10-30 14:15:19.413754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.413770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.421228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ec408 00:28:21.328 [2024-10-30 14:15:19.422406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12658 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.422423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.429883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166edd58 00:28:21.328 [2024-10-30 14:15:19.431089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.431108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.438546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fef90 00:28:21.328 [2024-10-30 14:15:19.439748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.439764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.447179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f2510 00:28:21.328 [2024-10-30 14:15:19.448360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.448377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.455826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f4298 00:28:21.328 [2024-10-30 14:15:19.456997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.328 [2024-10-30 14:15:19.457014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.328 [2024-10-30 14:15:19.464457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:21.329 [2024-10-30 14:15:19.465643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.465660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.473091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ec408 00:28:21.329 [2024-10-30 14:15:19.474236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.474252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.481730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166edd58 00:28:21.329 [2024-10-30 14:15:19.482920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:15031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.482937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.490357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fef90 00:28:21.329 [2024-10-30 14:15:19.491553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.491570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.498978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f2510 00:28:21.329 [2024-10-30 14:15:19.500162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.500179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.507606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f4298 00:28:21.329 [2024-10-30 14:15:19.508760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.508777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.516276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:21.329 [2024-10-30 14:15:19.517472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.517488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.524913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ec408 00:28:21.329 [2024-10-30 14:15:19.526112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.526128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.533542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166edd58 00:28:21.329 [2024-10-30 14:15:19.534739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.534758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.542180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fef90 00:28:21.329 [2024-10-30 14:15:19.543383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:117 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.543400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.550805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f2510 00:28:21.329 [2024-10-30 14:15:19.552008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.552025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.559434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f4298 00:28:21.329 [2024-10-30 14:15:19.560614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.560631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.568079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:21.329 [2024-10-30 14:15:19.569281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.569298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.576734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ec408 00:28:21.329 [2024-10-30 14:15:19.577942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.577958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.585398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166edd58 00:28:21.329 [2024-10-30 14:15:19.586588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.586604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.594033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fef90 00:28:21.329 [2024-10-30 14:15:19.595238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.595255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.602694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f2510 00:28:21.329 [2024-10-30 14:15:19.603882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.603898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.611361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f4298 00:28:21.329 [2024-10-30 14:15:19.612518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.612535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.329 [2024-10-30 14:15:19.619996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:21.329 [2024-10-30 14:15:19.621200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.329 [2024-10-30 14:15:19.621216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.589 [2024-10-30 14:15:19.628640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ec408 00:28:21.589 [2024-10-30 14:15:19.629811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.589 [2024-10-30 14:15:19.629827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.589 [2024-10-30 14:15:19.637271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f8a50 00:28:21.589 [2024-10-30 14:15:19.638449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.589 [2024-10-30 14:15:19.638465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:21.589 [2024-10-30 14:15:19.646085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f46d0 00:28:21.589 [2024-10-30 14:15:19.647276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.589 [2024-10-30 14:15:19.647293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:21.589 [2024-10-30 14:15:19.654704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ebfd0 00:28:21.589 [2024-10-30 14:15:19.655874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.589 [2024-10-30 14:15:19.655893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:21.589 [2024-10-30 14:15:19.663364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f8a50 00:28:21.589 [2024-10-30 
14:15:19.664538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.589 [2024-10-30 14:15:19.664554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:21.589 [2024-10-30 14:15:19.670987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 00:28:21.589 [2024-10-30 14:15:19.671865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.589 [2024-10-30 14:15:19.671881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:21.589 [2024-10-30 14:15:19.679872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f3e60 00:28:21.589 [2024-10-30 14:15:19.680733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.589 [2024-10-30 14:15:19.680754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.589 [2024-10-30 14:15:19.688520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:21.589 [2024-10-30 14:15:19.689384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.589 [2024-10-30 14:15:19.689400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.589 [2024-10-30 14:15:19.697191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaef0 00:28:21.589 [2024-10-30 14:15:19.698065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.698081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.705857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f46d0 00:28:21.590 [2024-10-30 14:15:19.706726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.706743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.714503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:21.590 [2024-10-30 14:15:19.715370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.715387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.723176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 
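The repeating pair of messages above is the NVMe/TCP data-digest (DDGST) check being exercised: the receiver recomputes a CRC32C over the PDU's data section, and when it disagrees with the digest carried in the PDU the mismatch is logged by data_crc32_calc_done and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with DNR clear, so the host may retry. A minimal, self-contained sketch of that kind of check (plain C, not SPDK's implementation; the 0xFFFFFFFF seed and final XOR follow the standard CRC32C convention and the payload/digest values below are made up):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
    static uint32_t crc32c(const void *buf, size_t len, uint32_t crc)
    {
        const uint8_t *p = buf;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++) {
                if (crc & 1) {
                    crc = (crc >> 1) ^ 0x82F63B78u;
                } else {
                    crc >>= 1;
                }
            }
        }
        return crc;
    }

    int main(void)
    {
        uint8_t payload[4096] = { 0 };   /* stand-in for a PDU data section (example only) */
        uint32_t expected = 0x12345678;  /* digest carried in the PDU trailer (made-up value) */

        /* Assumed convention: seed with 0xFFFFFFFF, XOR the result with 0xFFFFFFFF. */
        uint32_t actual = crc32c(payload, sizeof(payload), 0xFFFFFFFFu) ^ 0xFFFFFFFFu;

        if (actual != expected) {
            /* This is the situation the log reports as "Data digest error on tqpair=...";
             * the affected command is then completed with a transient transport error. */
            printf("data digest mismatch: got 0x%08x, expected 0x%08x\n",
                   (unsigned)actual, (unsigned)expected);
        }
        return 0;
    }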
00:28:21.590 [2024-10-30 14:15:19.724042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.724059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.732024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f3e60 00:28:21.590 [2024-10-30 14:15:19.732882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.732899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.740668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:21.590 [2024-10-30 14:15:19.741526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.741543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.749332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaef0 00:28:21.590 [2024-10-30 14:15:19.750194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.750211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 29482.00 IOPS, 115.16 MiB/s [2024-10-30T13:15:19.889Z] [2024-10-30 14:15:19.758028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f46d0 00:28:21.590 [2024-10-30 14:15:19.758892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.758908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.766700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:21.590 [2024-10-30 14:15:19.767558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.767575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.775359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 00:28:21.590 [2024-10-30 14:15:19.776195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.776212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.784016] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f3e60 00:28:21.590 [2024-10-30 14:15:19.784835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.784852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.792668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:21.590 [2024-10-30 14:15:19.793533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.793549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.801346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaef0 00:28:21.590 [2024-10-30 14:15:19.802170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.802187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.810026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f46d0 00:28:21.590 [2024-10-30 14:15:19.810879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.810896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.818673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:21.590 [2024-10-30 14:15:19.819534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.819550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.827301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 00:28:21.590 [2024-10-30 14:15:19.828170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.828186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.835933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f3e60 00:28:21.590 [2024-10-30 14:15:19.836790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.836807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.844589] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:21.590 [2024-10-30 14:15:19.845459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.845476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.853252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaef0 00:28:21.590 [2024-10-30 14:15:19.854111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.854127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.861942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f46d0 00:28:21.590 [2024-10-30 14:15:19.862805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.862822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.870612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:21.590 [2024-10-30 14:15:19.871475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.871492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.879271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 00:28:21.590 [2024-10-30 14:15:19.880142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.590 [2024-10-30 14:15:19.880161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.590 [2024-10-30 14:15:19.887918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f3e60 00:28:21.851 [2024-10-30 14:15:19.888780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.888796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.896560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:21.851 [2024-10-30 14:15:19.897383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.897399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 
14:15:19.905212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaef0 00:28:21.851 [2024-10-30 14:15:19.906076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.906092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.913860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f46d0 00:28:21.851 [2024-10-30 14:15:19.914720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.914736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.922491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:21.851 [2024-10-30 14:15:19.923360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.923376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.931172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 00:28:21.851 [2024-10-30 14:15:19.932045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.932061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.939853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f3e60 00:28:21.851 [2024-10-30 14:15:19.940725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.940741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.948502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:21.851 [2024-10-30 14:15:19.949375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.949392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.957168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaef0 00:28:21.851 [2024-10-30 14:15:19.958053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.958069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
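The periodic throughput report interleaved above ("29482.00 IOPS, 115.16 MiB/s") is consistent with the 4 KiB transfers shown in the surrounding WRITE prints (len:0x1000). A quick check of that arithmetic (the 4 KiB I/O size is read off the log, not a configured value here):

    #include <stdio.h>

    int main(void)
    {
        /* 29482 IOPS x 4096 B = 120,758,272 B/s = 115.16 MiB/s, matching the log line. */
        double iops = 29482.0;
        double io_size = 4096.0;                       /* bytes, from len:0x1000 */
        double mib_s = iops * io_size / (1024 * 1024);

        printf("%.2f MiB/s\n", mib_s);                 /* prints 115.16 */
        return 0;
    }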
00:28:21.851 [2024-10-30 14:15:19.965804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f46d0 00:28:21.851 [2024-10-30 14:15:19.966629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.966645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.974464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:21.851 [2024-10-30 14:15:19.975329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.975345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.983091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 00:28:21.851 [2024-10-30 14:15:19.983956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.983972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:19.991744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f3e60 00:28:21.851 [2024-10-30 14:15:19.992610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:19.992626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:20.000413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:21.851 [2024-10-30 14:15:20.001270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:20.001286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.851 [2024-10-30 14:15:20.009643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaef0 00:28:21.851 [2024-10-30 14:15:20.010514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.851 [2024-10-30 14:15:20.010531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.018270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f46d0 00:28:21.852 [2024-10-30 14:15:20.019013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.019029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 
sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.026995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:21.852 [2024-10-30 14:15:20.027862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.027878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.035655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 00:28:21.852 [2024-10-30 14:15:20.036483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.036499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.044318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f3e60 00:28:21.852 [2024-10-30 14:15:20.045181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.045199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.052993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6b70 00:28:21.852 [2024-10-30 14:15:20.053852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.053869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.061638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaef0 00:28:21.852 [2024-10-30 14:15:20.062508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.062525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.070293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f46d0 00:28:21.852 [2024-10-30 14:15:20.071153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.071170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.078957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:21.852 [2024-10-30 14:15:20.079814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.079830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.087685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f20d8 00:28:21.852 [2024-10-30 14:15:20.088558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.088574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.096362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f3e60 00:28:21.852 [2024-10-30 14:15:20.097101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.097118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.106255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e84c0 00:28:21.852 [2024-10-30 14:15:20.107487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.107506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.114649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f1868 00:28:21.852 [2024-10-30 14:15:20.115957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.115973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.122809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e3060 00:28:21.852 [2024-10-30 14:15:20.123678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.123694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.131209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e38d0 00:28:21.852 [2024-10-30 14:15:20.132110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.132127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.139969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f5be8 00:28:21.852 [2024-10-30 14:15:20.140925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.140941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.852 [2024-10-30 14:15:20.149043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fb8b8 00:28:21.852 [2024-10-30 14:15:20.150150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.852 [2024-10-30 14:15:20.150166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:22.113 [2024-10-30 14:15:20.156653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166dece0 00:28:22.113 [2024-10-30 14:15:20.157373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-10-30 14:15:20.157389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.113 [2024-10-30 14:15:20.164601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f6458 00:28:22.113 [2024-10-30 14:15:20.165244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-10-30 14:15:20.165260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:22.113 [2024-10-30 14:15:20.175215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e5658 00:28:22.113 [2024-10-30 14:15:20.176324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-10-30 14:15:20.176341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:22.113 [2024-10-30 14:15:20.182224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ea680 00:28:22.113 [2024-10-30 14:15:20.182715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-10-30 14:15:20.182733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.113 [2024-10-30 14:15:20.190861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f0788 00:28:22.113 [2024-10-30 14:15:20.191354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-10-30 14:15:20.191370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.113 [2024-10-30 14:15:20.200743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e3498 00:28:22.113 [2024-10-30 14:15:20.201779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-10-30 14:15:20.201796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.113 [2024-10-30 14:15:20.208617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166dece0 00:28:22.113 [2024-10-30 14:15:20.209380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-10-30 14:15:20.209396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:22.113 [2024-10-30 14:15:20.216712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f6cc8 00:28:22.114 [2024-10-30 14:15:20.217536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.217552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.225123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6738 00:28:22.114 [2024-10-30 14:15:20.225808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.225824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.234773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eb760 00:28:22.114 [2024-10-30 14:15:20.235576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.235592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.242929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f0ff8 00:28:22.114 [2024-10-30 14:15:20.243598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.243615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.251174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e2c28 00:28:22.114 [2024-10-30 14:15:20.251737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.251758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.259623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ff3c8 00:28:22.114 [2024-10-30 14:15:20.260183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.260200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.268817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:22.114 [2024-10-30 14:15:20.269486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.269502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.277590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fda78 00:28:22.114 [2024-10-30 14:15:20.278424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.278441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.286236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166de8a8 00:28:22.114 [2024-10-30 14:15:20.287051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.287069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.294877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fa7d8 00:28:22.114 [2024-10-30 14:15:20.295702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.295718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.303530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f6cc8 00:28:22.114 [2024-10-30 14:15:20.304252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.304269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.312166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e6300 00:28:22.114 [2024-10-30 14:15:20.312951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.312968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.320810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fe2e8 00:28:22.114 [2024-10-30 14:15:20.321635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.321652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.328847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e88f8 00:28:22.114 [2024-10-30 14:15:20.329633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.329649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.338657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eee38 00:28:22.114 [2024-10-30 14:15:20.339484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.339500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.347334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fa3a0 00:28:22.114 [2024-10-30 14:15:20.348183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.348199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.355970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e0630 00:28:22.114 [2024-10-30 14:15:20.356787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.356805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.364636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eee38 00:28:22.114 [2024-10-30 14:15:20.365467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.365484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.373310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fa3a0 00:28:22.114 [2024-10-30 14:15:20.374133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.374150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.381993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e0630 00:28:22.114 [2024-10-30 14:15:20.382816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.382833] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.390618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eee38 00:28:22.114 [2024-10-30 14:15:20.391454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.391471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.399260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fa3a0 00:28:22.114 [2024-10-30 14:15:20.400114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.400130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.114 [2024-10-30 14:15:20.407885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e0630 00:28:22.114 [2024-10-30 14:15:20.408710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-10-30 14:15:20.408730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.416366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fa7d8 00:28:22.382 [2024-10-30 14:15:20.417019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.417035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.426408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e84c0 00:28:22.382 [2024-10-30 14:15:20.427873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.427889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.433755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f9f68 00:28:22.382 [2024-10-30 14:15:20.434481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.434497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.442683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f5be8 00:28:22.382 [2024-10-30 14:15:20.443465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 
14:15:20.443482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.451216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f4b08 00:28:22.382 [2024-10-30 14:15:20.451767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.451784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.459999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e99d8 00:28:22.382 [2024-10-30 14:15:20.460758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.460775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.468644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fd640 00:28:22.382 [2024-10-30 14:15:20.469403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.469420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.477913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e5220 00:28:22.382 [2024-10-30 14:15:20.479023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.479040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.485237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e7818 00:28:22.382 [2024-10-30 14:15:20.485986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.486002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.493867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ddc00 00:28:22.382 [2024-10-30 14:15:20.494619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.494634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.502571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ea248 00:28:22.382 [2024-10-30 14:15:20.503295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.382 [2024-10-30 14:15:20.503311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.511209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f31b8 00:28:22.382 [2024-10-30 14:15:20.511946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.511963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.519866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e7818 00:28:22.382 [2024-10-30 14:15:20.520618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.520634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.528516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ddc00 00:28:22.382 [2024-10-30 14:15:20.529235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.529251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.537345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ef6a8 00:28:22.382 [2024-10-30 14:15:20.538072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.538089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.546011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f1868 00:28:22.382 [2024-10-30 14:15:20.546777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.546794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.554652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fb048 00:28:22.382 [2024-10-30 14:15:20.555388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.555404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.563299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ebb98 00:28:22.382 [2024-10-30 14:15:20.564038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22719 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.564055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.571952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaab8 00:28:22.382 [2024-10-30 14:15:20.572677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.572693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.580615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:22.382 [2024-10-30 14:15:20.581302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.581318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.589277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ddc00 00:28:22.382 [2024-10-30 14:15:20.590018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.590035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.597931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ef6a8 00:28:22.382 [2024-10-30 14:15:20.598660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.598677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.606626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f1868 00:28:22.382 [2024-10-30 14:15:20.607305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.607321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.382 [2024-10-30 14:15:20.615288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fb048 00:28:22.382 [2024-10-30 14:15:20.616003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.382 [2024-10-30 14:15:20.616020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.383 [2024-10-30 14:15:20.623957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ebb98 00:28:22.383 [2024-10-30 14:15:20.624690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15759 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.383 [2024-10-30 14:15:20.624706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.383 [2024-10-30 14:15:20.632608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaab8 00:28:22.383 [2024-10-30 14:15:20.633333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.383 [2024-10-30 14:15:20.633353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.383 [2024-10-30 14:15:20.641355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:22.383 [2024-10-30 14:15:20.642064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.383 [2024-10-30 14:15:20.642081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.383 [2024-10-30 14:15:20.650049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ddc00 00:28:22.383 [2024-10-30 14:15:20.650765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.383 [2024-10-30 14:15:20.650782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.383 [2024-10-30 14:15:20.658715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ef6a8 00:28:22.383 [2024-10-30 14:15:20.659446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.383 [2024-10-30 14:15:20.659463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.383 [2024-10-30 14:15:20.667367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f1868 00:28:22.383 [2024-10-30 14:15:20.668097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.383 [2024-10-30 14:15:20.668114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.762 [2024-10-30 14:15:20.676058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fb048 00:28:22.762 [2024-10-30 14:15:20.676782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.762 [2024-10-30 14:15:20.676799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.762 [2024-10-30 14:15:20.684684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ebb98 00:28:22.762 [2024-10-30 14:15:20.685421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:24005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.762 [2024-10-30 14:15:20.685437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.762 [2024-10-30 14:15:20.693350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaab8 00:28:22.762 [2024-10-30 14:15:20.694044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.762 [2024-10-30 14:15:20.694060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.762 [2024-10-30 14:15:20.702048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166e95a0 00:28:22.762 [2024-10-30 14:15:20.702781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.763 [2024-10-30 14:15:20.702797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.763 [2024-10-30 14:15:20.710725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ddc00 00:28:22.763 [2024-10-30 14:15:20.711456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.763 [2024-10-30 14:15:20.711473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.763 [2024-10-30 14:15:20.719382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ef6a8 00:28:22.763 [2024-10-30 14:15:20.720099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.763 [2024-10-30 14:15:20.720115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.763 [2024-10-30 14:15:20.728197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166f1868 00:28:22.763 [2024-10-30 14:15:20.728936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.763 [2024-10-30 14:15:20.728952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.763 [2024-10-30 14:15:20.736879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166fb048 00:28:22.763 [2024-10-30 14:15:20.737594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.763 [2024-10-30 14:15:20.737611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.763 [2024-10-30 14:15:20.745567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166ebb98 00:28:22.763 [2024-10-30 14:15:20.746290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:70 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.763 [2024-10-30 14:15:20.746307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.763 [2024-10-30 14:15:20.754241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc277e0) with pdu=0x2000166eaab8 00:28:22.763 [2024-10-30 14:15:20.754983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.763 [2024-10-30 14:15:20.754999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.763 29491.00 IOPS, 115.20 MiB/s 00:28:22.763 Latency(us) 00:28:22.763 [2024-10-30T13:15:21.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.763 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:22.763 nvme0n1 : 2.00 29486.71 115.18 0.00 0.00 4335.16 1843.20 10868.05 00:28:22.763 [2024-10-30T13:15:21.062Z] =================================================================================================================== 00:28:22.763 [2024-10-30T13:15:21.062Z] Total : 29486.71 115.18 0.00 0.00 4335.16 1843.20 10868.05 00:28:22.763 { 00:28:22.763 "results": [ 00:28:22.763 { 00:28:22.763 "job": "nvme0n1", 00:28:22.763 "core_mask": "0x2", 00:28:22.763 "workload": "randwrite", 00:28:22.763 "status": "finished", 00:28:22.763 "queue_depth": 128, 00:28:22.763 "io_size": 4096, 00:28:22.763 "runtime": 2.004632, 00:28:22.763 "iops": 29486.708782459824, 00:28:22.763 "mibps": 115.18245618148369, 00:28:22.763 "io_failed": 0, 00:28:22.763 "io_timeout": 0, 00:28:22.763 "avg_latency_us": 4335.163622173349, 00:28:22.763 "min_latency_us": 1843.2, 00:28:22.763 "max_latency_us": 10868.053333333333 00:28:22.763 } 00:28:22.763 ], 00:28:22.763 "core_count": 1 00:28:22.763 } 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:22.763 | .driver_specific 00:28:22.763 | .nvme_error 00:28:22.763 | .status_code 00:28:22.763 | .command_transient_transport_error' 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 231 > 0 )) 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197102 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197102 ']' 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197102 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.763 14:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197102 00:28:22.763 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:22.763 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:22.763 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197102' 00:28:22.763 killing process with pid 1197102 00:28:22.763 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197102 00:28:22.763 Received shutdown signal, test time was about 2.000000 seconds 00:28:22.763 00:28:22.763 Latency(us) 00:28:22.763 [2024-10-30T13:15:21.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.763 [2024-10-30T13:15:21.062Z] =================================================================================================================== 00:28:22.763 [2024-10-30T13:15:21.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:22.763 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197102 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197889 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1197889 /var/tmp/bperf.sock 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197889 ']' 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.082 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.082 [2024-10-30 14:15:21.169229] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
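The traced commands above are how host/digest.sh decides this pass succeeded: it reads bdevperf's per-bdev NVMe error counters over the private RPC socket and requires the transient transport error count (231 here) to be non-zero before killing the bperf process. A condensed sketch of that check, with the workspace prefix dropped from the paths shown above:

  # Read nvme0n1's iostat from bdevperf and extract the transient transport error counter,
  # which is where the TCP data digest failures are accounted once --nvme-error-stat is enabled.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
               | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The step passes only if at least one such error was recorded during the run.
  (( errcount > 0 ))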
00:28:23.082 [2024-10-30 14:15:21.169288] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197889 ] 00:28:23.082 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.082 Zero copy mechanism will not be used. 00:28:23.082 [2024-10-30 14:15:21.251682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.082 [2024-10-30 14:15:21.281580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.672 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.672 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:23.672 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:23.672 14:15:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:23.934 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:23.934 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.934 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.934 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.934 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.934 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.196 nvme0n1 00:28:24.196 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:24.196 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.196 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.196 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.196 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:24.196 14:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.459 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:24.459 Zero copy mechanism will not be used. 00:28:24.459 Running I/O for 2 seconds... 
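The setup traced just above is the recipe for this second digest-error pass: bdevperf is restarted for 128 KiB random writes at queue depth 16, NVMe error statistics are enabled with unlimited bdev retries, the controller is attached with TCP data digest (--ddgst) turned on, and crc32c error injection is switched from disabled to corrupting every 32nd operation, so the writes that follow complete with the transient transport errors logged below. A condensed sketch of that sequence, paths abbreviated to the spdk/ checkout root:

  # Start bdevperf on its own RPC socket; -z makes it wait for RPC-driven tests.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # Count NVMe errors per bdev and retry indefinitely so digest failures do not abort the run.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any stale crc32c error injection, then attach with data digest enabled.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd crc32c operation so data digests mismatch, then run the 2-second workload.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests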
00:28:24.459 [2024-10-30 14:15:22.561281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.561496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.561522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.566394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.566643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.566663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.576000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.576185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.576204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.580092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.580316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.580334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.584244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.584425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.584443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.588449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.588630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.588646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.592559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.592739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.592762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.596576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.596758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.596775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.600529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.600709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.600726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.604537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.604716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.604732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.608587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.608772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.608792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.612597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.612784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.612800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.616589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.616773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.616789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.620541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.620721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.620738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.626368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.626530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.626546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.629305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.629467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.629483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.633704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.633874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.633890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.642776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.642938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.642955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.649510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.649794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.649811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.654246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.654414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.654430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.659338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.659499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.659515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.666619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.666968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.666985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.673064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.673324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.673342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.678680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.678848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.678865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.682462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.682624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.682640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.687836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.687993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.688010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.691750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.691916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.691933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.698903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.699061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 
[2024-10-30 14:15:22.699078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.703588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.703758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.703775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.707102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.707264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.707280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.710656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.710823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.710840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.714732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.714903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.714919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.718137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.718300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.718316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.721872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.722033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.722050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.725455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.725618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.725635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.729254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.729433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.729450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.732437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.732593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.732612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.735526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.735690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.735706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.739950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.740111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.740127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.744007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.744173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.744189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.747824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.747987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.748004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.750980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.751144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.751160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.459 [2024-10-30 14:15:22.754369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.459 [2024-10-30 14:15:22.754590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.459 [2024-10-30 14:15:22.754606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.722 [2024-10-30 14:15:22.762191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.722 [2024-10-30 14:15:22.762352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.722 [2024-10-30 14:15:22.762369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.722 [2024-10-30 14:15:22.765547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.722 [2024-10-30 14:15:22.765707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.722 [2024-10-30 14:15:22.765724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.722 [2024-10-30 14:15:22.771598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.722 [2024-10-30 14:15:22.771956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.722 [2024-10-30 14:15:22.771973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.722 [2024-10-30 14:15:22.779351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.722 [2024-10-30 14:15:22.779406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.722 [2024-10-30 14:15:22.779422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.722 [2024-10-30 14:15:22.785308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.722 [2024-10-30 14:15:22.785587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.722 [2024-10-30 14:15:22.785604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.722 [2024-10-30 14:15:22.789703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.722 [2024-10-30 14:15:22.789755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.722 [2024-10-30 14:15:22.789771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.722 [2024-10-30 14:15:22.793296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.722 [2024-10-30 14:15:22.793343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.722 [2024-10-30 14:15:22.793358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.722 [2024-10-30 14:15:22.798603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.798648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.798663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.802990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.803041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.803056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.809101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.809150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.809165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.815541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.815588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.815603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.819557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.819613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.819628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.827750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 
[2024-10-30 14:15:22.827810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.827825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.832423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.832467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.832482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.836407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.836465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.836480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.842651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.842714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.842729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.846270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.846327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.846342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.850209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.850273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.850288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.856506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.856712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.856727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.864394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) 
with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.864657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.864676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.868977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.869033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.869047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.873082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.873145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.873160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.876573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.876616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.876631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.880142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.880186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.880202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.886750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.886814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.886829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.892370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.892416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.892431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.896670] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.896734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.896755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.900495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.900551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.900567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.908510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.908555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.908570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.912668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.912711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.912726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.723 [2024-10-30 14:15:22.916326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.723 [2024-10-30 14:15:22.916371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.723 [2024-10-30 14:15:22.916386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.920966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.921031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.921047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.928474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.928528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.928543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.937942] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.938018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.938033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.944025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.944074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.944090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.947691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.947737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.947757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.951173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.951219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.951237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.956399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.956446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.956462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.960207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.960270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.960285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.963880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.963934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.963949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:28:24.724 [2024-10-30 14:15:22.967297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.967341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.967356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.970999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.971043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.971058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.974214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.974269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.974283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.978853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.978914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.978929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.985789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.985866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.985881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.991565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.991632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.991647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.995111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.995159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.995174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:22.998893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:22.998937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:22.998952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:23.002577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:23.002643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:23.002658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:23.006214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:23.006265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:23.006280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:23.009501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:23.009542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:23.009558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:23.012791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:23.012836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:23.012851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:23.016313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:23.016359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:23.016375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.724 [2024-10-30 14:15:23.019736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.724 [2024-10-30 14:15:23.019829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.724 [2024-10-30 14:15:23.019845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.988 [2024-10-30 14:15:23.026330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.988 [2024-10-30 14:15:23.026385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.988 [2024-10-30 14:15:23.026400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.988 [2024-10-30 14:15:23.029644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.988 [2024-10-30 14:15:23.029694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.988 [2024-10-30 14:15:23.029709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.988 [2024-10-30 14:15:23.033165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.988 [2024-10-30 14:15:23.033215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.988 [2024-10-30 14:15:23.033230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.988 [2024-10-30 14:15:23.036852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.988 [2024-10-30 14:15:23.036911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.036927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.045069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.045128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.045143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.048662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.048725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.048740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.052692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.052763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.052778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.059462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.059767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.059785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.068448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.068535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.068553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.078394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.078643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.078660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.088744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.089064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.089081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.099486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.099584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.099600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.110128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.110387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.110404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.121168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.121465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 
14:15:23.121482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.128710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.128782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.128798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.134344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.134398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.134414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.138383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.138507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.138523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.143154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.143303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.143318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.150323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.150596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.150613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.155173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.155242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.155257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.161908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.161959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.161974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.169017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.169088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.169103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.173098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.173141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.173157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.178107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.178151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.178166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.184355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.184398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.184413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.191739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.191798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.191813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.197449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.197499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.197515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.203371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.203419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.203434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.209436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.209507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.209522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.212877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.212943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.212958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.216535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.216579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.216595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.220212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.220270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.220284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.223900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.223943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.223959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.227228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.227285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.227300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.231868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.231914] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.231932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.235281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.235334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.235349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.238801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.238856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.238871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.242390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.242444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.242460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.245942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.246005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.246020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.251463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.251521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.251536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.989 [2024-10-30 14:15:23.254774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.989 [2024-10-30 14:15:23.254834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.989 [2024-10-30 14:15:23.254849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.258068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.990 [2024-10-30 14:15:23.258130] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.258145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.261371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.990 [2024-10-30 14:15:23.261439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.261455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.264470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.990 [2024-10-30 14:15:23.264532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.264548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.267775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.990 [2024-10-30 14:15:23.267834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.267849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.270982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.990 [2024-10-30 14:15:23.271051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.271067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.273996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.990 [2024-10-30 14:15:23.274053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.274068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.276858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.990 [2024-10-30 14:15:23.276913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.276928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.280064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 
00:28:24.990 [2024-10-30 14:15:23.280131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.280146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.283116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.990 [2024-10-30 14:15:23.283171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.283186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.990 [2024-10-30 14:15:23.286388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:24.990 [2024-10-30 14:15:23.286442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.990 [2024-10-30 14:15:23.286458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.289226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.289289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.289305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.292432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.292545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.292560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.296255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.296329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.296345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.299157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.299220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.299235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.302035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.302102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.302117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.305031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.305085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.305100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.308253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.308367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.308382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.311965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.312021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.312036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.314930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.314991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.315006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.317834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.317892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.317910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.320731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.320795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.320811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.323628] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.323690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.323705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.326516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.326588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.326603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.329366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.329498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.329514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.333276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.253 [2024-10-30 14:15:23.333370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.253 [2024-10-30 14:15:23.333386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.253 [2024-10-30 14:15:23.337870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.338152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.338169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.343931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.343993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.344008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.349264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.349404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.349419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
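The repeated "data_crc32_calc_done: Data digest error" entries above come from the NVMe/TCP data digest (DDGST) check: the receiver recomputes a CRC32C over each PDU's data section and compares it with the digest carried in the PDU, and each mismatch is then reported as a command completion with a transient transport error (00/22), which is what the paired nvme_qpair.c prints record. As a rough, hedged illustration only — this is not SPDK's implementation, and the function names crc32c/verify_data_digest below are hypothetical — a bitwise CRC32C check of a data buffer against a received digest could look like this:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Software CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the digest
     * algorithm NVMe/TCP uses for its optional header/data digests. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical check mirroring what data-digest validation does conceptually:
     * recompute CRC32C over the received data and compare with the DDGST value
     * carried in the PDU. Returns 0 on match, -1 on a digest error. */
    static int verify_data_digest(const uint8_t *data, size_t len, uint32_t ddgst_from_pdu)
    {
        uint32_t computed = crc32c(data, len);
        if (computed != ddgst_from_pdu) {
            fprintf(stderr, "Data digest error: computed 0x%08x, expected 0x%08x\n",
                    computed, ddgst_from_pdu);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        uint8_t payload[16] = {0};
        uint32_t good = crc32c(payload, sizeof(payload));
        /* Corrupting one payload bit after the digest was taken forces the
         * mismatch path, analogous to the digest errors logged above. */
        payload[0] ^= 0x01;
        return verify_data_digest(payload, sizeof(payload), good) == 0 ? 0 : 1;
    }

In this sketch the corrupted payload makes the recomputed CRC32C differ from the stored digest, so verify_data_digest reports a digest error, which is the condition the log entries above are repeatedly exercising for each WRITE command.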
00:28:25.254 [2024-10-30 14:15:23.355371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.355492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.355507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.360427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.360485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.360500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.366289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.366354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.366369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.369780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.369839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.369854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.373136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.373197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.373212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.376393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.376449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.376465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.380388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.380453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.380468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.387430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.387660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.387676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.393816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.393877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.393893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.397298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.397361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.397376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.400814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.400880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.400894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.406598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.406803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.406819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.416287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.416329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.416344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.422405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.422719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.422736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.429197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.429258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.429272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.435503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.435553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.435568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.441477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.441547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.441562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.446256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.446302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.446320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.450018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.450078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.450093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.453788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.453849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.453863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.457497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.457542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.457557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.461118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.461168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.461183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.464860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.464915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.464930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.471963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.254 [2024-10-30 14:15:23.472034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.254 [2024-10-30 14:15:23.472049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.254 [2024-10-30 14:15:23.475471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.475519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.475534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.478951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.479002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.479018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.482400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.482459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.482474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.485914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.485973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 
14:15:23.485988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.489389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.489434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.489449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.493051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.493102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.493116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.496654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.496699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.496714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.500030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.500080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.500095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.503457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.503518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.503533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.506889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.506932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.506947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.510471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.510522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.510538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.513696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.513750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.513765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.517011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.517067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.517081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.520255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.520301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.520316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.523333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.523377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.523392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.528219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.528266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.528281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.533676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.533719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.533735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.538697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.538764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.538779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.543581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.543659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.543674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.255 [2024-10-30 14:15:23.547492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.255 [2024-10-30 14:15:23.547570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.255 [2024-10-30 14:15:23.547588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.516 6527.00 IOPS, 815.88 MiB/s [2024-10-30T13:15:23.815Z] [2024-10-30 14:15:23.556663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 14:15:23.556963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.556980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.516 [2024-10-30 14:15:23.563183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 14:15:23.563265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.563280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.516 [2024-10-30 14:15:23.567347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 14:15:23.567417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.567433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.516 [2024-10-30 14:15:23.575540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 14:15:23.575588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.575604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.516 [2024-10-30 14:15:23.584651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 
14:15:23.584705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.584720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.516 [2024-10-30 14:15:23.591196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 14:15:23.591326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.591342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.516 [2024-10-30 14:15:23.597660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 14:15:23.597705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.597720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.516 [2024-10-30 14:15:23.602206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 14:15:23.602251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.602266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.516 [2024-10-30 14:15:23.609285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 14:15:23.609379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.609395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.516 [2024-10-30 14:15:23.619276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.516 [2024-10-30 14:15:23.619345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.516 [2024-10-30 14:15:23.619361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.627469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.627528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.627543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.634000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with 
pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.634331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.634348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.641795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.642012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.642028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.649508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.649579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.649594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.657255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.657542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.657559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.665056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.665103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.665118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.672699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.672892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.672911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.682187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.682251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.682266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.689642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.689702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.689717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.695129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.695183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.695198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.701444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.701512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.701528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.706900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.706962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.706977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.710930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.710987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.711003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.718332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.718395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.718409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.725385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.725457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.725527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.733371] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.733459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.733475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.741227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.741418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.741434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.749082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.749170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.749185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.757826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.758127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.758144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.766043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.766300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.766317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.772521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.772568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.772583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.779372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.779626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.779643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.517 
[2024-10-30 14:15:23.789950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.790251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.790268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.798697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.799015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.799032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.517 [2024-10-30 14:15:23.808668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.517 [2024-10-30 14:15:23.808920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.517 [2024-10-30 14:15:23.808936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.819955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.820250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.820267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.831580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.831856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.831872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.843146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.843324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.843339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.854639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.854850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.854866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:25.779 [2024-10-30 14:15:23.866655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.866935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.866952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.877371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.877664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.877681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.888496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.888798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.888814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.899202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.899476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.899494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.910481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.910697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.910712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.922052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.922355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.922372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.933545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.933808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.933824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.945197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.945431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.945446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.956025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.956300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.956316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.964797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.964896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.964912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.974204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.974267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.974282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.981925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.981995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.982010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.988198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.988501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.988517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.779 [2024-10-30 14:15:23.998239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.779 [2024-10-30 14:15:23.998294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.779 [2024-10-30 14:15:23.998310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.006369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.006422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.006438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.013719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.014015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.014032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.021872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.021940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.021955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.029803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.029869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.029885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.036644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.036709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.036724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.042425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.042492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.042507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.051649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.051699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.051715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.059945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.060068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.060083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.069739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.069814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.069829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.780 [2024-10-30 14:15:24.076251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:25.780 [2024-10-30 14:15:24.076320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.780 [2024-10-30 14:15:24.076335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.083942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.084008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.084022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.094256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.094326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.094341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.102090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.102265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.102280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.109580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.109728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.109744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.117264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.117320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.117335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.123833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.123877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.123892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.130261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.130323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.130338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.137795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.138067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.138085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.146159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.146414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.146431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.151366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.151412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.151428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.157898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.157969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 
14:15:24.157985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.166944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.167103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.167119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.176427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.176482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.176497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.184943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.185181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.185197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.193030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.193083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.193101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.200227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.200291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.200306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.209507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.209614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.209630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.219490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.219607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:26.043 [2024-10-30 14:15:24.219623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.230135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.230358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.230373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.241029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.241075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.241091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.250952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.251087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.251103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.262135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.262428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.262445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.272696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.272980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.043 [2024-10-30 14:15:24.272996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.043 [2024-10-30 14:15:24.282939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.043 [2024-10-30 14:15:24.283181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.044 [2024-10-30 14:15:24.283197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.044 [2024-10-30 14:15:24.293431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.044 [2024-10-30 14:15:24.293690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:26.044 [2024-10-30 14:15:24.293706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.044 [2024-10-30 14:15:24.304067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.044 [2024-10-30 14:15:24.304279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.044 [2024-10-30 14:15:24.304295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.044 [2024-10-30 14:15:24.313906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.044 [2024-10-30 14:15:24.314212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.044 [2024-10-30 14:15:24.314229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.044 [2024-10-30 14:15:24.324047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.044 [2024-10-30 14:15:24.324301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.044 [2024-10-30 14:15:24.324318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.044 [2024-10-30 14:15:24.334593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.044 [2024-10-30 14:15:24.334832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.044 [2024-10-30 14:15:24.334848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.345097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.345354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.345371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.354782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.355023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.355038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.363779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.364037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.364052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.374230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.374513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.374530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.384716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.384908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.384924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.395424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.395653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.395669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.406031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.406311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.406327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.416220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.416542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.416559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.426788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.427051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.427068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.437583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.437869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.437886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.448092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.448356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.448372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.458142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.458442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.458462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.468208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.468465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.468482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.478435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.478687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.478704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.489080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.489346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.489363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.499527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.499707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.499723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.510058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.510309] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.510324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.305 [2024-10-30 14:15:24.520345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.305 [2024-10-30 14:15:24.520766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.305 [2024-10-30 14:15:24.520783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.306 [2024-10-30 14:15:24.530567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.306 [2024-10-30 14:15:24.530643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.306 [2024-10-30 14:15:24.530658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.306 [2024-10-30 14:15:24.540172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.306 [2024-10-30 14:15:24.540415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.306 [2024-10-30 14:15:24.540431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.306 [2024-10-30 14:15:24.551209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc27b20) with pdu=0x2000166fef90 00:28:26.306 [2024-10-30 14:15:24.551462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.306 [2024-10-30 14:15:24.551479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.306 5010.00 IOPS, 626.25 MiB/s 00:28:26.306 Latency(us) 00:28:26.306 [2024-10-30T13:15:24.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.306 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:26.306 nvme0n1 : 2.01 5003.06 625.38 0.00 0.00 3191.38 1372.16 11960.32 00:28:26.306 [2024-10-30T13:15:24.605Z] =================================================================================================================== 00:28:26.306 [2024-10-30T13:15:24.605Z] Total : 5003.06 625.38 0.00 0.00 3191.38 1372.16 11960.32 00:28:26.306 { 00:28:26.306 "results": [ 00:28:26.306 { 00:28:26.306 "job": "nvme0n1", 00:28:26.306 "core_mask": "0x2", 00:28:26.306 "workload": "randwrite", 00:28:26.306 "status": "finished", 00:28:26.306 "queue_depth": 16, 00:28:26.306 "io_size": 131072, 00:28:26.306 "runtime": 2.006572, 00:28:26.306 "iops": 5003.059945020662, 00:28:26.306 "mibps": 625.3824931275827, 00:28:26.306 "io_failed": 0, 00:28:26.306 "io_timeout": 0, 00:28:26.306 "avg_latency_us": 3191.3820048477605, 00:28:26.306 "min_latency_us": 1372.16, 00:28:26.306 "max_latency_us": 11960.32 00:28:26.306 } 00:28:26.306 ], 00:28:26.306 "core_count": 1 00:28:26.306 } 
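Every data digest failure injected above surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion (status 00/22), yet the results JSON still reports io_failed 0; the test therefore checks the NVMe error counters rather than the I/O failure count. A minimal sketch of pulling both numbers out by hand, assuming the bperf RPC socket at /var/tmp/bperf.sock is still listening, rpc.py is run from the SPDK checkout, and results.json is a hypothetical copy of the JSON block printed above:

# Throughput and failed-I/O count from the bdevperf results JSON.
jq -r '.results[0].iops, .results[0].io_failed' results.json

# Transient transport error count, using the same RPC call and jq filter as the
# get_transient_errcount helper traced below.
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'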
00:28:26.306 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:26.306 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:26.306 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:26.306 | .driver_specific 00:28:26.306 | .nvme_error 00:28:26.306 | .status_code 00:28:26.306 | .command_transient_transport_error' 00:28:26.306 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:26.566 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 323 > 0 )) 00:28:26.566 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197889 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197889 ']' 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197889 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197889 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197889' 00:28:26.567 killing process with pid 1197889 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197889 00:28:26.567 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.567 00:28:26.567 Latency(us) 00:28:26.567 [2024-10-30T13:15:24.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.567 [2024-10-30T13:15:24.866Z] =================================================================================================================== 00:28:26.567 [2024-10-30T13:15:24.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.567 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197889 00:28:26.827 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1195026 00:28:26.827 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1195026 ']' 00:28:26.827 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1195026 00:28:26.827 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:26.827 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.827 14:15:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195026 00:28:26.827 14:15:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:26.827 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:26.827 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195026' 00:28:26.827 killing process with pid 1195026 00:28:26.827 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1195026 00:28:26.827 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1195026 00:28:26.827 00:28:26.827 real 0m16.557s 00:28:26.827 user 0m32.782s 00:28:26.827 sys 0m3.569s 00:28:26.827 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.827 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.827 ************************************ 00:28:26.827 END TEST nvmf_digest_error 00:28:26.827 ************************************ 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.088 rmmod nvme_tcp 00:28:27.088 rmmod nvme_fabrics 00:28:27.088 rmmod nvme_keyring 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1195026 ']' 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1195026 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1195026 ']' 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1195026 00:28:27.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1195026) - No such process 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1195026 is not found' 00:28:27.088 Process with pid 1195026 is not found 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:27.088 14:15:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.088 14:15:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:29.634 00:28:29.634 real 0m43.213s 00:28:29.634 user 1m7.924s 00:28:29.634 sys 0m13.005s 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.634 ************************************ 00:28:29.634 END TEST nvmf_digest 00:28:29.634 ************************************ 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.634 ************************************ 00:28:29.634 START TEST nvmf_bdevperf 00:28:29.634 ************************************ 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:29.634 * Looking for test storage... 
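The nvmftestfini teardown traced above reduces to unloading the kernel NVMe initiator modules, stripping the firewall rules the suite tagged with SPDK_NVMF, and flushing the test interface before the bdevperf test rebuilds the same stack. A minimal sketch of the equivalent manual cleanup; the module names, the iptables filter, and the interface cvl_0_1 are taken from the trace, while the final namespace deletion is only an assumption about what remove_spdk_ns amounts to:

# Unload the host-side NVMe/TCP stack in the order the script uses.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
modprobe -v -r nvme-keyring

# Keep every iptables rule except the ones the tests added (tagged SPDK_NVMF).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Clear the initiator-side interface and (assumed) drop the target namespace.
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk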
00:28:29.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:29.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.634 --rc genhtml_branch_coverage=1 00:28:29.634 --rc genhtml_function_coverage=1 00:28:29.634 --rc genhtml_legend=1 00:28:29.634 --rc geninfo_all_blocks=1 00:28:29.634 --rc geninfo_unexecuted_blocks=1 00:28:29.634 00:28:29.634 ' 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:29.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.634 --rc genhtml_branch_coverage=1 00:28:29.634 --rc genhtml_function_coverage=1 00:28:29.634 --rc genhtml_legend=1 00:28:29.634 --rc geninfo_all_blocks=1 00:28:29.634 --rc geninfo_unexecuted_blocks=1 00:28:29.634 00:28:29.634 ' 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:29.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.634 --rc genhtml_branch_coverage=1 00:28:29.634 --rc genhtml_function_coverage=1 00:28:29.634 --rc genhtml_legend=1 00:28:29.634 --rc geninfo_all_blocks=1 00:28:29.634 --rc geninfo_unexecuted_blocks=1 00:28:29.634 00:28:29.634 ' 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:29.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.634 --rc genhtml_branch_coverage=1 00:28:29.634 --rc genhtml_function_coverage=1 00:28:29.634 --rc genhtml_legend=1 00:28:29.634 --rc geninfo_all_blocks=1 00:28:29.634 --rc geninfo_unexecuted_blocks=1 00:28:29.634 00:28:29.634 ' 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:29.634 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:29.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:29.635 14:15:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:37.785 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:37.785 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.785 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
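The scan above walks the supported NIC PCI IDs (the E810 ports show up as vendor 0x8086, device 0x159b) and then, for each matching PCI address, resolves the kernel net device by globbing sysfs, which is what produces the "Found net devices under 0000:4b:00.0: cvl_0_0" lines just below. A minimal stand-alone version of that lookup, with the PCI address taken from the log:

# Map a NIC's PCI address to its kernel net device name via the same sysfs glob
# nvmf/common.sh uses; prints "cvl_0_0" on this host.
pci=0000:4b:00.0
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] && echo "${dev##*/}"
done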
00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:37.786 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:37.786 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.786 14:15:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:37.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:28:37.786 00:28:37.786 --- 10.0.0.2 ping statistics --- 00:28:37.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.786 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:28:37.786 00:28:37.786 --- 10.0.0.1 ping statistics --- 00:28:37.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.786 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1202861 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1202861 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1202861 ']' 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.786 14:15:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:37.786 [2024-10-30 14:15:35.240348] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:28:37.786 [2024-10-30 14:15:35.240412] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.786 [2024-10-30 14:15:35.341966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:37.786 [2024-10-30 14:15:35.394220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.786 [2024-10-30 14:15:35.394275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.786 [2024-10-30 14:15:35.394284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.786 [2024-10-30 14:15:35.394291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.786 [2024-10-30 14:15:35.394298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.786 [2024-10-30 14:15:35.396171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.786 [2024-10-30 14:15:35.396329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.786 [2024-10-30 14:15:35.396330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.786 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.786 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:37.786 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.786 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.786 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.048 [2024-10-30 14:15:36.123208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.048 Malloc0 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
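Strung together, the rpc_cmd calls traced here and just below configure the entire target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE), a subsystem that allows any host, its namespace, and a listener on 10.0.0.2:4420. A minimal sketch of the same bring-up issued directly with rpc.py; every subcommand and argument is copied from the trace, while the relative rpc.py path and the default RPC socket are assumptions:

rpc=./scripts/rpc.py                                           # path into the SPDK checkout (assumed)
$rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # attach Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420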
00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.048 [2024-10-30 14:15:36.196175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.048 { 00:28:38.048 "params": { 00:28:38.048 "name": "Nvme$subsystem", 00:28:38.048 "trtype": "$TEST_TRANSPORT", 00:28:38.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.048 "adrfam": "ipv4", 00:28:38.048 "trsvcid": "$NVMF_PORT", 00:28:38.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.048 "hdgst": ${hdgst:-false}, 00:28:38.048 "ddgst": ${ddgst:-false} 00:28:38.048 }, 00:28:38.048 "method": "bdev_nvme_attach_controller" 00:28:38.048 } 00:28:38.048 EOF 00:28:38.048 )") 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:38.048 14:15:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:38.048 "params": { 00:28:38.048 "name": "Nvme1", 00:28:38.048 "trtype": "tcp", 00:28:38.048 "traddr": "10.0.0.2", 00:28:38.048 "adrfam": "ipv4", 00:28:38.048 "trsvcid": "4420", 00:28:38.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.048 "hdgst": false, 00:28:38.048 "ddgst": false 00:28:38.048 }, 00:28:38.048 "method": "bdev_nvme_attach_controller" 00:28:38.048 }' 00:28:38.048 [2024-10-30 14:15:36.263146] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:28:38.048 [2024-10-30 14:15:36.263215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202945 ] 00:28:38.310 [2024-10-30 14:15:36.355079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.310 [2024-10-30 14:15:36.407653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.310 Running I/O for 1 seconds... 00:28:39.699 8599.00 IOPS, 33.59 MiB/s 00:28:39.699 Latency(us) 00:28:39.699 [2024-10-30T13:15:37.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.699 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:39.699 Verification LBA range: start 0x0 length 0x4000 00:28:39.699 Nvme1n1 : 1.01 8623.90 33.69 0.00 0.00 14776.76 2266.45 13926.40 00:28:39.699 [2024-10-30T13:15:37.998Z] =================================================================================================================== 00:28:39.699 [2024-10-30T13:15:37.998Z] Total : 8623.90 33.69 0.00 0.00 14776.76 2266.45 13926.40 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1203283 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:39.699 { 00:28:39.699 "params": { 00:28:39.699 "name": "Nvme$subsystem", 00:28:39.699 "trtype": "$TEST_TRANSPORT", 00:28:39.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.699 "adrfam": "ipv4", 00:28:39.699 "trsvcid": "$NVMF_PORT", 00:28:39.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.699 "hdgst": ${hdgst:-false}, 00:28:39.699 "ddgst": ${ddgst:-false} 00:28:39.699 }, 00:28:39.699 "method": "bdev_nvme_attach_controller" 00:28:39.699 } 00:28:39.699 EOF 00:28:39.699 )") 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:39.699 14:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:39.699 "params": { 00:28:39.699 "name": "Nvme1", 00:28:39.699 "trtype": "tcp", 00:28:39.699 "traddr": "10.0.0.2", 00:28:39.699 "adrfam": "ipv4", 00:28:39.699 "trsvcid": "4420", 00:28:39.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:39.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:39.699 "hdgst": false, 00:28:39.699 "ddgst": false 00:28:39.699 }, 00:28:39.699 "method": "bdev_nvme_attach_controller" 00:28:39.699 }' 00:28:39.699 [2024-10-30 14:15:37.803452] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:28:39.699 [2024-10-30 14:15:37.803535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203283 ] 00:28:39.699 [2024-10-30 14:15:37.896063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.699 [2024-10-30 14:15:37.948888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.961 Running I/O for 15 seconds... 00:28:42.287 11090.00 IOPS, 43.32 MiB/s [2024-10-30T13:15:40.851Z] 11285.00 IOPS, 44.08 MiB/s [2024-10-30T13:15:40.851Z] 14:15:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1202861 00:28:42.552 14:15:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:42.552 [2024-10-30 14:15:40.755238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.552 [2024-10-30 14:15:40.755279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.552 [2024-10-30 14:15:40.755316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.552 [2024-10-30 14:15:40.755338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.552 [2024-10-30 14:15:40.755357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.552 [2024-10-30 14:15:40.755377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.552 [2024-10-30 
14:15:40.755397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.552 [2024-10-30 14:15:40.755614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.552 [2024-10-30 14:15:40.755623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.553 [2024-10-30 14:15:40.755897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.755914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.755932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.755949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.755966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.755983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.755992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.755999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 
[2024-10-30 14:15:40.756233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.553 [2024-10-30 14:15:40.756436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.553 [2024-10-30 14:15:40.756443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:69 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95072 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:42.554 [2024-10-30 14:15:40.756931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.756983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.756993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.757000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.757010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.757017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.757026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.554 [2024-10-30 14:15:40.757033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.554 [2024-10-30 14:15:40.757043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757101] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757273] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.555 [2024-10-30 14:15:40.757613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
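This second bdevperf run is the failover half of the test: bdevperf is started for 15 seconds with -f, and about three seconds in the script hard-kills the target (the kill -9 1202861 traced above), so the long run of READ/WRITE commands ending in ABORTED - SQ DELETION is the host completing every queued command on the I/O qpair as aborted once its TCP connection to the target is gone. A sketch of just this step, run from an SPDK checkout with the test helpers sourced (gen_nvmf_target_json comes from the nvmf common script, as the nvmf/common.sh trace above shows) and with nvmfpid holding the target's pid; the bdevperf arguments are copied from the trace, the paths are assumptions:

    # Sketch of the induced failure only, not the full bdevperf.sh flow.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &    # same arguments as the traced invocation
    bdevperfpid=$!
    sleep 3                                    # let verify I/O run for a few seconds
    kill -9 "$nvmfpid"                         # hard-kill nvmf_tgt (pid 1202861 in this log)
    # From here the host prints the aborted commands and enters its reset/reconnect loop.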
00:28:42.555 [2024-10-30 14:15:40.757623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ea430 is same with the state(6) to be set 00:28:42.555 [2024-10-30 14:15:40.757634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:42.555 [2024-10-30 14:15:40.757641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:42.555 [2024-10-30 14:15:40.757648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95480 len:8 PRP1 0x0 PRP2 0x0 00:28:42.555 [2024-10-30 14:15:40.757655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.555 [2024-10-30 14:15:40.757750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.555 [2024-10-30 14:15:40.757767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.555 [2024-10-30 14:15:40.757783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.555 [2024-10-30 14:15:40.757799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.555 [2024-10-30 14:15:40.757807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.555 [2024-10-30 14:15:40.761388] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.555 [2024-10-30 14:15:40.761409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.555 [2024-10-30 14:15:40.762242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.555 [2024-10-30 14:15:40.762279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.555 [2024-10-30 14:15:40.762291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.556 [2024-10-30 14:15:40.762529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.556 [2024-10-30 14:15:40.762760] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.556 [2024-10-30 14:15:40.762770] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:28:42.556 [2024-10-30 14:15:40.762779] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.556 [2024-10-30 14:15:40.766273] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.556 [2024-10-30 14:15:40.775539] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.556 [2024-10-30 14:15:40.776127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.556 [2024-10-30 14:15:40.776147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.556 [2024-10-30 14:15:40.776155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.556 [2024-10-30 14:15:40.776372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.556 [2024-10-30 14:15:40.776593] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.556 [2024-10-30 14:15:40.776603] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.556 [2024-10-30 14:15:40.776610] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.556 [2024-10-30 14:15:40.780108] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.556 [2024-10-30 14:15:40.789364] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.556 [2024-10-30 14:15:40.790059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.556 [2024-10-30 14:15:40.790098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.556 [2024-10-30 14:15:40.790110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.556 [2024-10-30 14:15:40.790347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.556 [2024-10-30 14:15:40.790568] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.556 [2024-10-30 14:15:40.790578] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.556 [2024-10-30 14:15:40.790586] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.556 [2024-10-30 14:15:40.794090] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
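Every recovery attempt in this stretch fails the same way: nvme_ctrlr_disconnect resets the controller, the fresh TCP connect() to 10.0.0.2:4420 is refused with errno = 111 because the killed target no longer has a listener, controller reinitialization therefore fails, and bdev_nvme logs "Resetting controller failed" before the next attempt starts. errno 111 on Linux is ECONNREFUSED, which can be confirmed with a one-liner (illustrative only, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused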
00:28:42.556 [2024-10-30 14:15:40.803147] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.556 [2024-10-30 14:15:40.803755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.556 [2024-10-30 14:15:40.803796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.556 [2024-10-30 14:15:40.803808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.556 [2024-10-30 14:15:40.804048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.556 [2024-10-30 14:15:40.804269] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.556 [2024-10-30 14:15:40.804279] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.556 [2024-10-30 14:15:40.804287] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.556 [2024-10-30 14:15:40.807796] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.556 [2024-10-30 14:15:40.817062] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.556 [2024-10-30 14:15:40.817736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.556 [2024-10-30 14:15:40.817786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.556 [2024-10-30 14:15:40.817797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.556 [2024-10-30 14:15:40.818036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.556 [2024-10-30 14:15:40.818257] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.556 [2024-10-30 14:15:40.818267] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.556 [2024-10-30 14:15:40.818279] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.556 [2024-10-30 14:15:40.821786] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.556 [2024-10-30 14:15:40.830844] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.556 [2024-10-30 14:15:40.831510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.556 [2024-10-30 14:15:40.831554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.556 [2024-10-30 14:15:40.831566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.556 [2024-10-30 14:15:40.831814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.556 [2024-10-30 14:15:40.832037] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.556 [2024-10-30 14:15:40.832047] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.556 [2024-10-30 14:15:40.832055] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.556 [2024-10-30 14:15:40.835554] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.556 [2024-10-30 14:15:40.844621] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.556 [2024-10-30 14:15:40.845256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.556 [2024-10-30 14:15:40.845301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.556 [2024-10-30 14:15:40.845313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.556 [2024-10-30 14:15:40.845552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.556 [2024-10-30 14:15:40.845785] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.556 [2024-10-30 14:15:40.845797] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.556 [2024-10-30 14:15:40.845805] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.820 [2024-10-30 14:15:40.849307] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.820 [2024-10-30 14:15:40.858400] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.820 [2024-10-30 14:15:40.859063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.820 [2024-10-30 14:15:40.859112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.820 [2024-10-30 14:15:40.859123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.820 [2024-10-30 14:15:40.859366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.820 [2024-10-30 14:15:40.859588] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.820 [2024-10-30 14:15:40.859599] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.820 [2024-10-30 14:15:40.859607] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.820 [2024-10-30 14:15:40.863125] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.820 [2024-10-30 14:15:40.872203] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.820 [2024-10-30 14:15:40.872864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.820 [2024-10-30 14:15:40.872915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.820 [2024-10-30 14:15:40.872928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.820 [2024-10-30 14:15:40.873172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.820 [2024-10-30 14:15:40.873396] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.820 [2024-10-30 14:15:40.873406] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.820 [2024-10-30 14:15:40.873415] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.820 [2024-10-30 14:15:40.876933] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.820 [2024-10-30 14:15:40.886010] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.820 [2024-10-30 14:15:40.886660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.820 [2024-10-30 14:15:40.886713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.820 [2024-10-30 14:15:40.886726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.820 [2024-10-30 14:15:40.886983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.820 [2024-10-30 14:15:40.887207] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.820 [2024-10-30 14:15:40.887218] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.820 [2024-10-30 14:15:40.887227] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.820 [2024-10-30 14:15:40.890736] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.820 [2024-10-30 14:15:40.899809] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.820 [2024-10-30 14:15:40.900502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.820 [2024-10-30 14:15:40.900557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.820 [2024-10-30 14:15:40.900570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.820 [2024-10-30 14:15:40.900830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.820 [2024-10-30 14:15:40.901055] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.820 [2024-10-30 14:15:40.901065] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.820 [2024-10-30 14:15:40.901074] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.820 [2024-10-30 14:15:40.904588] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.820 [2024-10-30 14:15:40.913681] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.820 [2024-10-30 14:15:40.914397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.820 [2024-10-30 14:15:40.914463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.820 [2024-10-30 14:15:40.914484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.820 [2024-10-30 14:15:40.914737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.820 [2024-10-30 14:15:40.914975] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.820 [2024-10-30 14:15:40.914989] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.820 [2024-10-30 14:15:40.914998] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.820 [2024-10-30 14:15:40.918521] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.820 [2024-10-30 14:15:40.927613] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.820 [2024-10-30 14:15:40.928194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.820 [2024-10-30 14:15:40.928225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.820 [2024-10-30 14:15:40.928236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.820 [2024-10-30 14:15:40.928456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.820 [2024-10-30 14:15:40.928676] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.820 [2024-10-30 14:15:40.928690] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.820 [2024-10-30 14:15:40.928698] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.820 [2024-10-30 14:15:40.932217] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.820 [2024-10-30 14:15:40.941504] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.820 [2024-10-30 14:15:40.942181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.820 [2024-10-30 14:15:40.942246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.820 [2024-10-30 14:15:40.942260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.820 [2024-10-30 14:15:40.942514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.820 [2024-10-30 14:15:40.942739] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.820 [2024-10-30 14:15:40.942762] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.820 [2024-10-30 14:15:40.942772] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.820 [2024-10-30 14:15:40.946301] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.820 [2024-10-30 14:15:40.955399] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.820 [2024-10-30 14:15:40.956094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.820 [2024-10-30 14:15:40.956157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.820 [2024-10-30 14:15:40.956171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.820 [2024-10-30 14:15:40.956425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.821 [2024-10-30 14:15:40.956659] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.821 [2024-10-30 14:15:40.956671] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.821 [2024-10-30 14:15:40.956681] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.821 [2024-10-30 14:15:40.960222] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.821 [2024-10-30 14:15:40.969311] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.821 [2024-10-30 14:15:40.970052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.821 [2024-10-30 14:15:40.970116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.821 [2024-10-30 14:15:40.970129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.821 [2024-10-30 14:15:40.970383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.821 [2024-10-30 14:15:40.970609] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.821 [2024-10-30 14:15:40.970621] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.821 [2024-10-30 14:15:40.970631] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.821 [2024-10-30 14:15:40.974168] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.821 [2024-10-30 14:15:40.983263] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.821 [2024-10-30 14:15:40.983861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.821 [2024-10-30 14:15:40.983926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.821 [2024-10-30 14:15:40.983941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.821 [2024-10-30 14:15:40.984195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.821 [2024-10-30 14:15:40.984420] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.821 [2024-10-30 14:15:40.984431] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.821 [2024-10-30 14:15:40.984440] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.821 [2024-10-30 14:15:40.987974] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.821 [2024-10-30 14:15:40.997067] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.821 [2024-10-30 14:15:40.997788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.821 [2024-10-30 14:15:40.997853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.821 [2024-10-30 14:15:40.997866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.821 [2024-10-30 14:15:40.998121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.821 [2024-10-30 14:15:40.998348] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.821 [2024-10-30 14:15:40.998360] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.821 [2024-10-30 14:15:40.998377] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.821 [2024-10-30 14:15:41.001908] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.821 [2024-10-30 14:15:41.011009] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.821 [2024-10-30 14:15:41.011719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.821 [2024-10-30 14:15:41.011795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.821 [2024-10-30 14:15:41.011808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.821 [2024-10-30 14:15:41.012062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.821 [2024-10-30 14:15:41.012288] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.821 [2024-10-30 14:15:41.012301] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.821 [2024-10-30 14:15:41.012310] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.821 [2024-10-30 14:15:41.015837] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.821 [2024-10-30 14:15:41.024925] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.821 [2024-10-30 14:15:41.025542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.821 [2024-10-30 14:15:41.025573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.821 [2024-10-30 14:15:41.025584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.821 [2024-10-30 14:15:41.025813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.821 [2024-10-30 14:15:41.026034] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.821 [2024-10-30 14:15:41.026045] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.821 [2024-10-30 14:15:41.026053] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.821 [2024-10-30 14:15:41.029561] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.821 [2024-10-30 14:15:41.038837] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.821 [2024-10-30 14:15:41.039498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.821 [2024-10-30 14:15:41.039562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.821 [2024-10-30 14:15:41.039576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.821 [2024-10-30 14:15:41.039839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.821 [2024-10-30 14:15:41.040066] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.821 [2024-10-30 14:15:41.040078] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.821 [2024-10-30 14:15:41.040087] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.821 [2024-10-30 14:15:41.043604] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.821 [2024-10-30 14:15:41.052734] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.821 [2024-10-30 14:15:41.053359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.821 [2024-10-30 14:15:41.053388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.821 [2024-10-30 14:15:41.053398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.821 [2024-10-30 14:15:41.053618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.821 [2024-10-30 14:15:41.053846] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.821 [2024-10-30 14:15:41.053859] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.821 [2024-10-30 14:15:41.053867] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.821 [2024-10-30 14:15:41.057390] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.821 [2024-10-30 14:15:41.066686] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.821 [2024-10-30 14:15:41.067357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.821 [2024-10-30 14:15:41.067422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.821 [2024-10-30 14:15:41.067436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.821 [2024-10-30 14:15:41.067688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.821 [2024-10-30 14:15:41.067931] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.821 [2024-10-30 14:15:41.067944] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.821 [2024-10-30 14:15:41.067953] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.822 [2024-10-30 14:15:41.071468] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.822 [2024-10-30 14:15:41.080544] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.822 [2024-10-30 14:15:41.081233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.822 [2024-10-30 14:15:41.081298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.822 [2024-10-30 14:15:41.081311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.822 [2024-10-30 14:15:41.081565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.822 [2024-10-30 14:15:41.081805] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.822 [2024-10-30 14:15:41.081818] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.822 [2024-10-30 14:15:41.081827] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.822 [2024-10-30 14:15:41.085348] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:42.822 [2024-10-30 14:15:41.094440] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.822 [2024-10-30 14:15:41.095123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.822 [2024-10-30 14:15:41.095187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.822 [2024-10-30 14:15:41.095208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.822 [2024-10-30 14:15:41.095461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.822 [2024-10-30 14:15:41.095687] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.822 [2024-10-30 14:15:41.095699] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.822 [2024-10-30 14:15:41.095708] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.822 [2024-10-30 14:15:41.099242] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:42.822 [2024-10-30 14:15:41.108435] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.822 [2024-10-30 14:15:41.109154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.822 [2024-10-30 14:15:41.109219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:42.822 [2024-10-30 14:15:41.109232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:42.822 [2024-10-30 14:15:41.109485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:42.822 [2024-10-30 14:15:41.109712] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.822 [2024-10-30 14:15:41.109724] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.822 [2024-10-30 14:15:41.109733] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.822 [2024-10-30 14:15:41.113290] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.085 [2024-10-30 14:15:41.122382] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.123142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.085 [2024-10-30 14:15:41.123207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.085 [2024-10-30 14:15:41.123220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.085 [2024-10-30 14:15:41.123474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.085 [2024-10-30 14:15:41.123699] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.085 [2024-10-30 14:15:41.123712] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.085 [2024-10-30 14:15:41.123721] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.085 [2024-10-30 14:15:41.127262] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.085 [2024-10-30 14:15:41.136132] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.136814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.085 [2024-10-30 14:15:41.136879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.085 [2024-10-30 14:15:41.136894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.085 [2024-10-30 14:15:41.137148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.085 [2024-10-30 14:15:41.137391] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.085 [2024-10-30 14:15:41.137404] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.085 [2024-10-30 14:15:41.137412] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.085 [2024-10-30 14:15:41.140947] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.085 [2024-10-30 14:15:41.150031] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.150706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.085 [2024-10-30 14:15:41.150781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.085 [2024-10-30 14:15:41.150795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.085 [2024-10-30 14:15:41.151048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.085 [2024-10-30 14:15:41.151274] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.085 [2024-10-30 14:15:41.151286] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.085 [2024-10-30 14:15:41.151295] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.085 [2024-10-30 14:15:41.154822] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.085 [2024-10-30 14:15:41.163923] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.164636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.085 [2024-10-30 14:15:41.164699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.085 [2024-10-30 14:15:41.164712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.085 [2024-10-30 14:15:41.164980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.085 [2024-10-30 14:15:41.165208] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.085 [2024-10-30 14:15:41.165219] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.085 [2024-10-30 14:15:41.165229] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.085 [2024-10-30 14:15:41.168739] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.085 [2024-10-30 14:15:41.177823] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.178494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.085 [2024-10-30 14:15:41.178558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.085 [2024-10-30 14:15:41.178571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.085 [2024-10-30 14:15:41.178838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.085 [2024-10-30 14:15:41.179065] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.085 [2024-10-30 14:15:41.179077] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.085 [2024-10-30 14:15:41.179093] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.085 [2024-10-30 14:15:41.182610] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.085 [2024-10-30 14:15:41.191689] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.192367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.085 [2024-10-30 14:15:41.192431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.085 [2024-10-30 14:15:41.192444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.085 [2024-10-30 14:15:41.192698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.085 [2024-10-30 14:15:41.192939] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.085 [2024-10-30 14:15:41.192954] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.085 [2024-10-30 14:15:41.192963] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.085 [2024-10-30 14:15:41.196482] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.085 [2024-10-30 14:15:41.205563] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.206287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.085 [2024-10-30 14:15:41.206352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.085 [2024-10-30 14:15:41.206365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.085 [2024-10-30 14:15:41.206618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.085 [2024-10-30 14:15:41.206859] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.085 [2024-10-30 14:15:41.206873] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.085 [2024-10-30 14:15:41.206882] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.085 [2024-10-30 14:15:41.210414] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.085 [2024-10-30 14:15:41.219497] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.220166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.085 [2024-10-30 14:15:41.220231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.085 [2024-10-30 14:15:41.220245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.085 [2024-10-30 14:15:41.220498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.085 [2024-10-30 14:15:41.220724] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.085 [2024-10-30 14:15:41.220737] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.085 [2024-10-30 14:15:41.220763] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.085 [2024-10-30 14:15:41.224279] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.085 [2024-10-30 14:15:41.233370] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.233982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.085 [2024-10-30 14:15:41.234014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.085 [2024-10-30 14:15:41.234024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.085 [2024-10-30 14:15:41.234245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.085 [2024-10-30 14:15:41.234465] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.085 [2024-10-30 14:15:41.234477] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.085 [2024-10-30 14:15:41.234486] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.085 [2024-10-30 14:15:41.238031] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.085 [2024-10-30 14:15:41.247318] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.085 [2024-10-30 14:15:41.247916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.247942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.247952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.248172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.248392] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.248404] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.248412] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 9404.33 IOPS, 36.74 MiB/s [2024-10-30T13:15:41.385Z] [2024-10-30 14:15:41.253577] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.086 [2024-10-30 14:15:41.261227] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.086 [2024-10-30 14:15:41.261843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.261889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.261899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.262137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.262359] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.262371] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.262380] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 [2024-10-30 14:15:41.265907] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.086 [2024-10-30 14:15:41.274971] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.086 [2024-10-30 14:15:41.275669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.275734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.275766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.276020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.276246] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.276258] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.276267] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 [2024-10-30 14:15:41.279791] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.086 [2024-10-30 14:15:41.288869] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.086 [2024-10-30 14:15:41.289583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.289646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.289659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.289926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.290153] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.290167] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.290176] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 [2024-10-30 14:15:41.293692] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.086 [2024-10-30 14:15:41.302789] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.086 [2024-10-30 14:15:41.303480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.303543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.303557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.303825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.304051] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.304063] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.304073] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 [2024-10-30 14:15:41.307594] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.086 [2024-10-30 14:15:41.316698] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.086 [2024-10-30 14:15:41.317412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.317477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.317491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.317744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.317993] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.318006] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.318014] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 [2024-10-30 14:15:41.321534] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.086 [2024-10-30 14:15:41.330619] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.086 [2024-10-30 14:15:41.331320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.331382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.331396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.331649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.331886] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.331898] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.331906] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 [2024-10-30 14:15:41.335425] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.086 [2024-10-30 14:15:41.344508] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.086 [2024-10-30 14:15:41.345233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.345295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.345308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.345561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.345799] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.345809] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.345818] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 [2024-10-30 14:15:41.349336] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.086 [2024-10-30 14:15:41.358423] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.086 [2024-10-30 14:15:41.359134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.359195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.359208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.359461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.359684] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.359695] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.359710] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 [2024-10-30 14:15:41.363246] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.086 [2024-10-30 14:15:41.372324] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.086 [2024-10-30 14:15:41.373061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.086 [2024-10-30 14:15:41.373124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.086 [2024-10-30 14:15:41.373137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.086 [2024-10-30 14:15:41.373389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.086 [2024-10-30 14:15:41.373613] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.086 [2024-10-30 14:15:41.373622] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.086 [2024-10-30 14:15:41.373631] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.086 [2024-10-30 14:15:41.377166] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.350 [2024-10-30 14:15:41.386247] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.350 [2024-10-30 14:15:41.386885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.350 [2024-10-30 14:15:41.386947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.350 [2024-10-30 14:15:41.386961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.350 [2024-10-30 14:15:41.387214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.350 [2024-10-30 14:15:41.387438] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.350 [2024-10-30 14:15:41.387447] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.350 [2024-10-30 14:15:41.387456] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.350 [2024-10-30 14:15:41.390995] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.350 [2024-10-30 14:15:41.400077] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.350 [2024-10-30 14:15:41.400770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.350 [2024-10-30 14:15:41.400831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.350 [2024-10-30 14:15:41.400845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.350 [2024-10-30 14:15:41.401098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.350 [2024-10-30 14:15:41.401322] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.350 [2024-10-30 14:15:41.401332] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.350 [2024-10-30 14:15:41.401340] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.350 [2024-10-30 14:15:41.404866] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.350 [2024-10-30 14:15:41.413968] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.350 [2024-10-30 14:15:41.414686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.350 [2024-10-30 14:15:41.414760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.350 [2024-10-30 14:15:41.414773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.350 [2024-10-30 14:15:41.415026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.350 [2024-10-30 14:15:41.415249] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.350 [2024-10-30 14:15:41.415258] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.350 [2024-10-30 14:15:41.415266] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.350 [2024-10-30 14:15:41.418787] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.350 [2024-10-30 14:15:41.427870] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.350 [2024-10-30 14:15:41.428576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.350 [2024-10-30 14:15:41.428637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.350 [2024-10-30 14:15:41.428649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.350 [2024-10-30 14:15:41.428916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.350 [2024-10-30 14:15:41.429141] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.350 [2024-10-30 14:15:41.429151] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.350 [2024-10-30 14:15:41.429160] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.350 [2024-10-30 14:15:41.432674] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.350 [2024-10-30 14:15:41.441750] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.350 [2024-10-30 14:15:41.442466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.350 [2024-10-30 14:15:41.442527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.350 [2024-10-30 14:15:41.442540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.350 [2024-10-30 14:15:41.442805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.350 [2024-10-30 14:15:41.443031] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.351 [2024-10-30 14:15:41.443039] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.351 [2024-10-30 14:15:41.443047] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.351 [2024-10-30 14:15:41.446561] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.351 [2024-10-30 14:15:41.455651] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.351 [2024-10-30 14:15:41.456348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.351 [2024-10-30 14:15:41.456411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.351 [2024-10-30 14:15:41.456430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.351 [2024-10-30 14:15:41.456683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.351 [2024-10-30 14:15:41.456922] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.351 [2024-10-30 14:15:41.456932] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.351 [2024-10-30 14:15:41.456940] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.351 [2024-10-30 14:15:41.460474] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.351 [2024-10-30 14:15:41.469551] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.351 [2024-10-30 14:15:41.470241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.351 [2024-10-30 14:15:41.470302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.351 [2024-10-30 14:15:41.470315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.351 [2024-10-30 14:15:41.470567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.351 [2024-10-30 14:15:41.470805] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.351 [2024-10-30 14:15:41.470816] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.351 [2024-10-30 14:15:41.470824] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.351 [2024-10-30 14:15:41.474339] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.351 [2024-10-30 14:15:41.483409] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.351 [2024-10-30 14:15:41.484099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.351 [2024-10-30 14:15:41.484161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.351 [2024-10-30 14:15:41.484174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.351 [2024-10-30 14:15:41.484426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.351 [2024-10-30 14:15:41.484649] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.351 [2024-10-30 14:15:41.484659] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.351 [2024-10-30 14:15:41.484667] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.351 [2024-10-30 14:15:41.488200] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.351 [2024-10-30 14:15:41.497272] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.351 [2024-10-30 14:15:41.498003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.351 [2024-10-30 14:15:41.498064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.351 [2024-10-30 14:15:41.498076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.351 [2024-10-30 14:15:41.498329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.351 [2024-10-30 14:15:41.498562] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.351 [2024-10-30 14:15:41.498573] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.351 [2024-10-30 14:15:41.498582] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.351 [2024-10-30 14:15:41.502120] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.351 [2024-10-30 14:15:41.511210] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.351 [2024-10-30 14:15:41.511858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.351 [2024-10-30 14:15:41.511921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.351 [2024-10-30 14:15:41.511934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.351 [2024-10-30 14:15:41.512187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.351 [2024-10-30 14:15:41.512410] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.351 [2024-10-30 14:15:41.512429] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.351 [2024-10-30 14:15:41.512439] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.351 [2024-10-30 14:15:41.515989] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.351 [2024-10-30 14:15:41.525073] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.351 [2024-10-30 14:15:41.525744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.351 [2024-10-30 14:15:41.525815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.351 [2024-10-30 14:15:41.525828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.351 [2024-10-30 14:15:41.526080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.351 [2024-10-30 14:15:41.526304] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.351 [2024-10-30 14:15:41.526313] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.351 [2024-10-30 14:15:41.526321] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.351 [2024-10-30 14:15:41.529839] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.351 [2024-10-30 14:15:41.538924] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.351 [2024-10-30 14:15:41.539593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.351 [2024-10-30 14:15:41.539655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.351 [2024-10-30 14:15:41.539667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.351 [2024-10-30 14:15:41.539935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.351 [2024-10-30 14:15:41.540161] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.351 [2024-10-30 14:15:41.540170] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.351 [2024-10-30 14:15:41.540186] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.351 [2024-10-30 14:15:41.543707] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.351 [2024-10-30 14:15:41.552795] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.351 [2024-10-30 14:15:41.553462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.351 [2024-10-30 14:15:41.553522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.351 [2024-10-30 14:15:41.553535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.351 [2024-10-30 14:15:41.553802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.351 [2024-10-30 14:15:41.554027] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.352 [2024-10-30 14:15:41.554037] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.352 [2024-10-30 14:15:41.554045] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.352 [2024-10-30 14:15:41.557562] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.352 [2024-10-30 14:15:41.566655] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.352 [2024-10-30 14:15:41.567384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.352 [2024-10-30 14:15:41.567446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.352 [2024-10-30 14:15:41.567458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.352 [2024-10-30 14:15:41.567710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.352 [2024-10-30 14:15:41.567948] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.352 [2024-10-30 14:15:41.567958] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.352 [2024-10-30 14:15:41.567967] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.352 [2024-10-30 14:15:41.571483] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.352 [2024-10-30 14:15:41.580565] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.352 [2024-10-30 14:15:41.581142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.352 [2024-10-30 14:15:41.581171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.352 [2024-10-30 14:15:41.581180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.352 [2024-10-30 14:15:41.581400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.352 [2024-10-30 14:15:41.581617] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.352 [2024-10-30 14:15:41.581627] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.352 [2024-10-30 14:15:41.581635] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.352 [2024-10-30 14:15:41.585153] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.352 [2024-10-30 14:15:41.594441] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.352 [2024-10-30 14:15:41.595111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.352 [2024-10-30 14:15:41.595174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.352 [2024-10-30 14:15:41.595186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.352 [2024-10-30 14:15:41.595440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.352 [2024-10-30 14:15:41.595664] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.352 [2024-10-30 14:15:41.595673] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.352 [2024-10-30 14:15:41.595681] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.352 [2024-10-30 14:15:41.599213] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.352 [2024-10-30 14:15:41.607113] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.352 [2024-10-30 14:15:41.607675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.352 [2024-10-30 14:15:41.607730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.352 [2024-10-30 14:15:41.607740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.352 [2024-10-30 14:15:41.607938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.352 [2024-10-30 14:15:41.608097] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.352 [2024-10-30 14:15:41.608104] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.352 [2024-10-30 14:15:41.608110] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.352 [2024-10-30 14:15:41.610531] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.352 [2024-10-30 14:15:41.619738] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.352 [2024-10-30 14:15:41.620332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.352 [2024-10-30 14:15:41.620382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.352 [2024-10-30 14:15:41.620391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.352 [2024-10-30 14:15:41.620571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.352 [2024-10-30 14:15:41.620725] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.352 [2024-10-30 14:15:41.620732] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.352 [2024-10-30 14:15:41.620738] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.352 [2024-10-30 14:15:41.623167] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.352 [2024-10-30 14:15:41.632346] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.352 [2024-10-30 14:15:41.632856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.352 [2024-10-30 14:15:41.632900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.352 [2024-10-30 14:15:41.632915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.352 [2024-10-30 14:15:41.633090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.352 [2024-10-30 14:15:41.633244] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.352 [2024-10-30 14:15:41.633250] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.352 [2024-10-30 14:15:41.633256] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.352 [2024-10-30 14:15:41.635678] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.352 [2024-10-30 14:15:41.645010] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.352 [2024-10-30 14:15:41.645474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.352 [2024-10-30 14:15:41.645518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.352 [2024-10-30 14:15:41.645527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.352 [2024-10-30 14:15:41.645699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.352 [2024-10-30 14:15:41.645860] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.352 [2024-10-30 14:15:41.645867] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.352 [2024-10-30 14:15:41.645874] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.616 [2024-10-30 14:15:41.648284] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.616 [2024-10-30 14:15:41.657616] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.616 [2024-10-30 14:15:41.658192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.616 [2024-10-30 14:15:41.658232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.616 [2024-10-30 14:15:41.658241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.616 [2024-10-30 14:15:41.658412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.616 [2024-10-30 14:15:41.658575] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.616 [2024-10-30 14:15:41.658582] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.616 [2024-10-30 14:15:41.658588] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.616 [2024-10-30 14:15:41.661005] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.616 [2024-10-30 14:15:41.670324] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.616 [2024-10-30 14:15:41.670815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.616 [2024-10-30 14:15:41.670844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.616 [2024-10-30 14:15:41.670851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.616 [2024-10-30 14:15:41.671012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.616 [2024-10-30 14:15:41.671167] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.616 [2024-10-30 14:15:41.671173] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.616 [2024-10-30 14:15:41.671179] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.616 [2024-10-30 14:15:41.673586] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.616 [2024-10-30 14:15:41.682902] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.616 [2024-10-30 14:15:41.683459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.616 [2024-10-30 14:15:41.683495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.616 [2024-10-30 14:15:41.683504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.616 [2024-10-30 14:15:41.683672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.616 [2024-10-30 14:15:41.683832] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.616 [2024-10-30 14:15:41.683839] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.616 [2024-10-30 14:15:41.683845] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.616 [2024-10-30 14:15:41.686251] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.616 [2024-10-30 14:15:41.695573] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.616 [2024-10-30 14:15:41.696050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.616 [2024-10-30 14:15:41.696067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.616 [2024-10-30 14:15:41.696073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.616 [2024-10-30 14:15:41.696223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.616 [2024-10-30 14:15:41.696372] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.616 [2024-10-30 14:15:41.696378] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.616 [2024-10-30 14:15:41.696384] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.616 [2024-10-30 14:15:41.698788] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.616 [2024-10-30 14:15:41.708242] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.616 [2024-10-30 14:15:41.708842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.616 [2024-10-30 14:15:41.708876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.616 [2024-10-30 14:15:41.708884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.616 [2024-10-30 14:15:41.709051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.616 [2024-10-30 14:15:41.709203] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.616 [2024-10-30 14:15:41.709212] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.616 [2024-10-30 14:15:41.709222] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.616 [2024-10-30 14:15:41.711634] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.616 [2024-10-30 14:15:41.720825] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.616 [2024-10-30 14:15:41.721410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.616 [2024-10-30 14:15:41.721443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.616 [2024-10-30 14:15:41.721451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.616 [2024-10-30 14:15:41.721617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.616 [2024-10-30 14:15:41.721776] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.616 [2024-10-30 14:15:41.721783] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.616 [2024-10-30 14:15:41.721788] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.616 [2024-10-30 14:15:41.724189] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.616 [2024-10-30 14:15:41.733401] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.616 [2024-10-30 14:15:41.733850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.616 [2024-10-30 14:15:41.733883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.616 [2024-10-30 14:15:41.733891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.616 [2024-10-30 14:15:41.734056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.616 [2024-10-30 14:15:41.734208] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.617 [2024-10-30 14:15:41.734215] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.617 [2024-10-30 14:15:41.734220] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.617 [2024-10-30 14:15:41.736628] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.617 [2024-10-30 14:15:41.746085] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.617 [2024-10-30 14:15:41.746635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.617 [2024-10-30 14:15:41.746665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.617 [2024-10-30 14:15:41.746673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.617 [2024-10-30 14:15:41.746845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.617 [2024-10-30 14:15:41.746997] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.617 [2024-10-30 14:15:41.747003] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.617 [2024-10-30 14:15:41.747008] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.617 [2024-10-30 14:15:41.749406] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.617 [2024-10-30 14:15:41.758723] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.617 [2024-10-30 14:15:41.759283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.617 [2024-10-30 14:15:41.759313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.617 [2024-10-30 14:15:41.759322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.617 [2024-10-30 14:15:41.759486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.617 [2024-10-30 14:15:41.759638] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.617 [2024-10-30 14:15:41.759645] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.617 [2024-10-30 14:15:41.759650] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.617 [2024-10-30 14:15:41.762059] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.617 [2024-10-30 14:15:41.771378] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.617 [2024-10-30 14:15:41.771885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.617 [2024-10-30 14:15:41.771916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.617 [2024-10-30 14:15:41.771925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.617 [2024-10-30 14:15:41.772091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.617 [2024-10-30 14:15:41.772243] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.617 [2024-10-30 14:15:41.772249] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.617 [2024-10-30 14:15:41.772254] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.617 [2024-10-30 14:15:41.774658] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.617 [2024-10-30 14:15:41.783978] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.617 [2024-10-30 14:15:41.784439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.617 [2024-10-30 14:15:41.784455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.617 [2024-10-30 14:15:41.784460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.617 [2024-10-30 14:15:41.784609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.617 [2024-10-30 14:15:41.784762] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.617 [2024-10-30 14:15:41.784769] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.617 [2024-10-30 14:15:41.784774] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.617 [2024-10-30 14:15:41.787184] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.617 [2024-10-30 14:15:41.796565] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.617 [2024-10-30 14:15:41.797017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.617 [2024-10-30 14:15:41.797033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.617 [2024-10-30 14:15:41.797045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.617 [2024-10-30 14:15:41.797193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.617 [2024-10-30 14:15:41.797342] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.617 [2024-10-30 14:15:41.797347] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.617 [2024-10-30 14:15:41.797352] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.617 [2024-10-30 14:15:41.799744] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.617 [2024-10-30 14:15:41.809191] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.617 [2024-10-30 14:15:41.809759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.617 [2024-10-30 14:15:41.809789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.617 [2024-10-30 14:15:41.809797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.617 [2024-10-30 14:15:41.809964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.617 [2024-10-30 14:15:41.810115] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.617 [2024-10-30 14:15:41.810121] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.617 [2024-10-30 14:15:41.810127] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.617 [2024-10-30 14:15:41.812526] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.617 [2024-10-30 14:15:41.821853] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.617 [2024-10-30 14:15:41.822338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.617 [2024-10-30 14:15:41.822353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.617 [2024-10-30 14:15:41.822359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.617 [2024-10-30 14:15:41.822507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.617 [2024-10-30 14:15:41.822656] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.617 [2024-10-30 14:15:41.822661] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.617 [2024-10-30 14:15:41.822667] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.617 [2024-10-30 14:15:41.825069] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.617 [2024-10-30 14:15:41.834519] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.617 [2024-10-30 14:15:41.835080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.617 [2024-10-30 14:15:41.835110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.617 [2024-10-30 14:15:41.835119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.617 [2024-10-30 14:15:41.835283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.617 [2024-10-30 14:15:41.835438] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.617 [2024-10-30 14:15:41.835444] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.617 [2024-10-30 14:15:41.835449] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.617 [2024-10-30 14:15:41.837857] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.617 [2024-10-30 14:15:41.847174] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.617 [2024-10-30 14:15:41.847704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.618 [2024-10-30 14:15:41.847719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.618 [2024-10-30 14:15:41.847725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.618 [2024-10-30 14:15:41.847878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.618 [2024-10-30 14:15:41.848027] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.618 [2024-10-30 14:15:41.848032] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.618 [2024-10-30 14:15:41.848037] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.618 [2024-10-30 14:15:41.850426] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.618 [2024-10-30 14:15:41.859877] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.618 [2024-10-30 14:15:41.860362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.618 [2024-10-30 14:15:41.860374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.618 [2024-10-30 14:15:41.860380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.618 [2024-10-30 14:15:41.860528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.618 [2024-10-30 14:15:41.860677] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.618 [2024-10-30 14:15:41.860682] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.618 [2024-10-30 14:15:41.860687] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.618 [2024-10-30 14:15:41.863085] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.618 [2024-10-30 14:15:41.872540] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.618 [2024-10-30 14:15:41.873083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.618 [2024-10-30 14:15:41.873113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.618 [2024-10-30 14:15:41.873121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.618 [2024-10-30 14:15:41.873285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.618 [2024-10-30 14:15:41.873437] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.618 [2024-10-30 14:15:41.873443] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.618 [2024-10-30 14:15:41.873451] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.618 [2024-10-30 14:15:41.875859] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.618 [2024-10-30 14:15:41.885179] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.618 [2024-10-30 14:15:41.885751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.618 [2024-10-30 14:15:41.885781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.618 [2024-10-30 14:15:41.885789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.618 [2024-10-30 14:15:41.885953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.618 [2024-10-30 14:15:41.886105] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.618 [2024-10-30 14:15:41.886111] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.618 [2024-10-30 14:15:41.886116] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.618 [2024-10-30 14:15:41.888515] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.618 [2024-10-30 14:15:41.897833] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.618 [2024-10-30 14:15:41.898395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.618 [2024-10-30 14:15:41.898425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.618 [2024-10-30 14:15:41.898434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.618 [2024-10-30 14:15:41.898598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.618 [2024-10-30 14:15:41.898757] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.618 [2024-10-30 14:15:41.898764] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.618 [2024-10-30 14:15:41.898769] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.618 [2024-10-30 14:15:41.901167] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.618 [2024-10-30 14:15:41.910490] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.618 [2024-10-30 14:15:41.911047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.618 [2024-10-30 14:15:41.911079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.618 [2024-10-30 14:15:41.911087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.618 [2024-10-30 14:15:41.911251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.618 [2024-10-30 14:15:41.911402] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.618 [2024-10-30 14:15:41.911409] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.618 [2024-10-30 14:15:41.911414] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.881 [2024-10-30 14:15:41.913830] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.881 [2024-10-30 14:15:41.923152] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.881 [2024-10-30 14:15:41.923781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.881 [2024-10-30 14:15:41.923811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.881 [2024-10-30 14:15:41.923819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.881 [2024-10-30 14:15:41.923986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.881 [2024-10-30 14:15:41.924137] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.881 [2024-10-30 14:15:41.924143] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.881 [2024-10-30 14:15:41.924149] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.881 [2024-10-30 14:15:41.926549] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.881 [2024-10-30 14:15:41.935724] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.882 [2024-10-30 14:15:41.936273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.882 [2024-10-30 14:15:41.936303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.882 [2024-10-30 14:15:41.936312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.882 [2024-10-30 14:15:41.936477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.882 [2024-10-30 14:15:41.936628] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.882 [2024-10-30 14:15:41.936634] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.882 [2024-10-30 14:15:41.936639] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.882 [2024-10-30 14:15:41.939040] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.882 [2024-10-30 14:15:41.948353] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.882 [2024-10-30 14:15:41.948997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.882 [2024-10-30 14:15:41.949027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.882 [2024-10-30 14:15:41.949036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.882 [2024-10-30 14:15:41.949201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.882 [2024-10-30 14:15:41.949352] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.882 [2024-10-30 14:15:41.949358] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.882 [2024-10-30 14:15:41.949364] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.882 [2024-10-30 14:15:41.951767] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.882 [2024-10-30 14:15:41.960949] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.882 [2024-10-30 14:15:41.961537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.882 [2024-10-30 14:15:41.961571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.882 [2024-10-30 14:15:41.961579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.882 [2024-10-30 14:15:41.961744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.882 [2024-10-30 14:15:41.961902] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.882 [2024-10-30 14:15:41.961908] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.882 [2024-10-30 14:15:41.961913] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.882 [2024-10-30 14:15:41.964314] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.882 [2024-10-30 14:15:41.973632] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.882 [2024-10-30 14:15:41.974282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.882 [2024-10-30 14:15:41.974313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.882 [2024-10-30 14:15:41.974321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.882 [2024-10-30 14:15:41.974486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.882 [2024-10-30 14:15:41.974637] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.882 [2024-10-30 14:15:41.974643] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.882 [2024-10-30 14:15:41.974648] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.882 [2024-10-30 14:15:41.977050] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.882 [2024-10-30 14:15:41.986212] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.882 [2024-10-30 14:15:41.986698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.882 [2024-10-30 14:15:41.986712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.882 [2024-10-30 14:15:41.986718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.882 [2024-10-30 14:15:41.986875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.882 [2024-10-30 14:15:41.987024] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.882 [2024-10-30 14:15:41.987029] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.882 [2024-10-30 14:15:41.987035] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.882 [2024-10-30 14:15:41.989432] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.882 [2024-10-30 14:15:41.998881] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.882 [2024-10-30 14:15:41.999425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.882 [2024-10-30 14:15:41.999455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.882 [2024-10-30 14:15:41.999464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.882 [2024-10-30 14:15:41.999632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.882 [2024-10-30 14:15:41.999791] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.882 [2024-10-30 14:15:41.999799] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.882 [2024-10-30 14:15:41.999804] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.882 [2024-10-30 14:15:42.002204] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.882 [2024-10-30 14:15:42.011533] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.882 [2024-10-30 14:15:42.011979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.882 [2024-10-30 14:15:42.011995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.882 [2024-10-30 14:15:42.012001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.882 [2024-10-30 14:15:42.012150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.882 [2024-10-30 14:15:42.012298] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.882 [2024-10-30 14:15:42.012305] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.882 [2024-10-30 14:15:42.012310] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.882 [2024-10-30 14:15:42.014715] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.882 [2024-10-30 14:15:42.024195] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.882 [2024-10-30 14:15:42.024679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.882 [2024-10-30 14:15:42.024691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.882 [2024-10-30 14:15:42.024697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.882 [2024-10-30 14:15:42.024851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.882 [2024-10-30 14:15:42.025001] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.882 [2024-10-30 14:15:42.025008] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.882 [2024-10-30 14:15:42.025013] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.882 [2024-10-30 14:15:42.027410] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.882 [2024-10-30 14:15:42.036880] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.882 [2024-10-30 14:15:42.037226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.882 [2024-10-30 14:15:42.037238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.882 [2024-10-30 14:15:42.037244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.883 [2024-10-30 14:15:42.037393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.883 [2024-10-30 14:15:42.037541] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.883 [2024-10-30 14:15:42.037547] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.883 [2024-10-30 14:15:42.037556] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.883 [2024-10-30 14:15:42.039959] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.883 [2024-10-30 14:15:42.049562] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.883 [2024-10-30 14:15:42.050108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.883 [2024-10-30 14:15:42.050139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.883 [2024-10-30 14:15:42.050147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.883 [2024-10-30 14:15:42.050311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.883 [2024-10-30 14:15:42.050463] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.883 [2024-10-30 14:15:42.050469] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.883 [2024-10-30 14:15:42.050475] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.883 [2024-10-30 14:15:42.052880] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.883 [2024-10-30 14:15:42.062206] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.883 [2024-10-30 14:15:42.062775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.883 [2024-10-30 14:15:42.062805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.883 [2024-10-30 14:15:42.062814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.883 [2024-10-30 14:15:42.062980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.883 [2024-10-30 14:15:42.063132] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.883 [2024-10-30 14:15:42.063138] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.883 [2024-10-30 14:15:42.063143] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.883 [2024-10-30 14:15:42.065545] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.883 [2024-10-30 14:15:42.074888] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.883 [2024-10-30 14:15:42.075375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.883 [2024-10-30 14:15:42.075390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.883 [2024-10-30 14:15:42.075396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.883 [2024-10-30 14:15:42.075545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.883 [2024-10-30 14:15:42.075694] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.883 [2024-10-30 14:15:42.075699] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.883 [2024-10-30 14:15:42.075704] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.883 [2024-10-30 14:15:42.078105] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.883 [2024-10-30 14:15:42.087561] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.883 [2024-10-30 14:15:42.087907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.883 [2024-10-30 14:15:42.087922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.883 [2024-10-30 14:15:42.087928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.883 [2024-10-30 14:15:42.088077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.883 [2024-10-30 14:15:42.088226] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.883 [2024-10-30 14:15:42.088231] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.883 [2024-10-30 14:15:42.088236] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.883 [2024-10-30 14:15:42.090635] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.883 [2024-10-30 14:15:42.100243] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.883 [2024-10-30 14:15:42.100717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.883 [2024-10-30 14:15:42.100729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.883 [2024-10-30 14:15:42.100735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.883 [2024-10-30 14:15:42.100887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.883 [2024-10-30 14:15:42.101035] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.883 [2024-10-30 14:15:42.101041] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.883 [2024-10-30 14:15:42.101046] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.883 [2024-10-30 14:15:42.103439] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.883 [2024-10-30 14:15:42.112904] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.883 [2024-10-30 14:15:42.113221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.883 [2024-10-30 14:15:42.113233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.883 [2024-10-30 14:15:42.113239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.883 [2024-10-30 14:15:42.113388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.883 [2024-10-30 14:15:42.113536] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.883 [2024-10-30 14:15:42.113542] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.883 [2024-10-30 14:15:42.113547] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.883 [2024-10-30 14:15:42.115948] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.883 [2024-10-30 14:15:42.125547] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.883 [2024-10-30 14:15:42.125845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.883 [2024-10-30 14:15:42.125861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.883 [2024-10-30 14:15:42.125866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.883 [2024-10-30 14:15:42.126015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.883 [2024-10-30 14:15:42.126163] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.883 [2024-10-30 14:15:42.126168] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.883 [2024-10-30 14:15:42.126173] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.883 [2024-10-30 14:15:42.128569] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.883 [2024-10-30 14:15:42.138172] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.883 [2024-10-30 14:15:42.138658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.883 [2024-10-30 14:15:42.138670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.883 [2024-10-30 14:15:42.138676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.883 [2024-10-30 14:15:42.138828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.883 [2024-10-30 14:15:42.138977] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.883 [2024-10-30 14:15:42.138982] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.883 [2024-10-30 14:15:42.138987] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.884 [2024-10-30 14:15:42.141379] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.884 [2024-10-30 14:15:42.150828] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.884 [2024-10-30 14:15:42.151300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.884 [2024-10-30 14:15:42.151314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.884 [2024-10-30 14:15:42.151319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.884 [2024-10-30 14:15:42.151467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.884 [2024-10-30 14:15:42.151616] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.884 [2024-10-30 14:15:42.151621] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.884 [2024-10-30 14:15:42.151626] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.884 [2024-10-30 14:15:42.154027] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.884 [2024-10-30 14:15:42.163491] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.884 [2024-10-30 14:15:42.163955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.884 [2024-10-30 14:15:42.163968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.884 [2024-10-30 14:15:42.163973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.884 [2024-10-30 14:15:42.164125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.884 [2024-10-30 14:15:42.164273] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.884 [2024-10-30 14:15:42.164279] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.884 [2024-10-30 14:15:42.164284] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.884 [2024-10-30 14:15:42.166679] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.884 [2024-10-30 14:15:42.176144] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.884 [2024-10-30 14:15:42.176627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.884 [2024-10-30 14:15:42.176639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:43.884 [2024-10-30 14:15:42.176644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:43.884 [2024-10-30 14:15:42.176798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:43.884 [2024-10-30 14:15:42.176946] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.884 [2024-10-30 14:15:42.176952] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.884 [2024-10-30 14:15:42.176957] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.176 [2024-10-30 14:15:42.179351] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.176 [2024-10-30 14:15:42.188802] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.176 [2024-10-30 14:15:42.189395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.176 [2024-10-30 14:15:42.189425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.176 [2024-10-30 14:15:42.189433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.176 [2024-10-30 14:15:42.189598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.176 [2024-10-30 14:15:42.189757] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.176 [2024-10-30 14:15:42.189764] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.176 [2024-10-30 14:15:42.189770] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.176 [2024-10-30 14:15:42.192169] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.176 [2024-10-30 14:15:42.201496] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.176 [2024-10-30 14:15:42.201958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.176 [2024-10-30 14:15:42.201974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.176 [2024-10-30 14:15:42.201980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.176 [2024-10-30 14:15:42.202129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.176 [2024-10-30 14:15:42.202278] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.176 [2024-10-30 14:15:42.202284] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.176 [2024-10-30 14:15:42.202292] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.176 [2024-10-30 14:15:42.204691] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.177 [2024-10-30 14:15:42.214159] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.177 [2024-10-30 14:15:42.214538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.177 [2024-10-30 14:15:42.214550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.177 [2024-10-30 14:15:42.214556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.177 [2024-10-30 14:15:42.214704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.177 [2024-10-30 14:15:42.214858] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.177 [2024-10-30 14:15:42.214864] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.177 [2024-10-30 14:15:42.214869] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.177 [2024-10-30 14:15:42.217264] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.177 [2024-10-30 14:15:42.226729] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.177 [2024-10-30 14:15:42.227072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.177 [2024-10-30 14:15:42.227086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.177 [2024-10-30 14:15:42.227092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.177 [2024-10-30 14:15:42.227241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.177 [2024-10-30 14:15:42.227389] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.177 [2024-10-30 14:15:42.227394] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.177 [2024-10-30 14:15:42.227399] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.177 [2024-10-30 14:15:42.229796] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.177 [2024-10-30 14:15:42.239391] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.177 [2024-10-30 14:15:42.239843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.177 [2024-10-30 14:15:42.239855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.177 [2024-10-30 14:15:42.239860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.177 [2024-10-30 14:15:42.240009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.177 [2024-10-30 14:15:42.240157] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.177 [2024-10-30 14:15:42.240162] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.177 [2024-10-30 14:15:42.240167] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.177 [2024-10-30 14:15:42.242559] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.177 7053.25 IOPS, 27.55 MiB/s [2024-10-30T13:15:42.476Z] [2024-10-30 14:15:42.253151] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.177 [2024-10-30 14:15:42.253643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.177 [2024-10-30 14:15:42.253654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.177 [2024-10-30 14:15:42.253660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.177 [2024-10-30 14:15:42.253813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.177 [2024-10-30 14:15:42.253962] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.177 [2024-10-30 14:15:42.253968] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.177 [2024-10-30 14:15:42.253973] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.177 [2024-10-30 14:15:42.256367] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.177 [2024-10-30 14:15:42.265833] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.177 [2024-10-30 14:15:42.266322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.177 [2024-10-30 14:15:42.266334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.177 [2024-10-30 14:15:42.266340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.177 [2024-10-30 14:15:42.266488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.177 [2024-10-30 14:15:42.266636] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.177 [2024-10-30 14:15:42.266642] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.177 [2024-10-30 14:15:42.266647] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.177 [2024-10-30 14:15:42.269045] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.177 [2024-10-30 14:15:42.278508] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.177 [2024-10-30 14:15:42.279008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.177 [2024-10-30 14:15:42.279022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.177 [2024-10-30 14:15:42.279027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.177 [2024-10-30 14:15:42.279175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.177 [2024-10-30 14:15:42.279324] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.177 [2024-10-30 14:15:42.279329] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.177 [2024-10-30 14:15:42.279334] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.177 [2024-10-30 14:15:42.281729] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.177 [2024-10-30 14:15:42.291196] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.177 [2024-10-30 14:15:42.291803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.177 [2024-10-30 14:15:42.291837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.177 [2024-10-30 14:15:42.291846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.177 [2024-10-30 14:15:42.292013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.177 [2024-10-30 14:15:42.292164] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.177 [2024-10-30 14:15:42.292170] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.177 [2024-10-30 14:15:42.292176] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.177 [2024-10-30 14:15:42.294578] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.177 [2024-10-30 14:15:42.303895] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.177 [2024-10-30 14:15:42.304374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.177 [2024-10-30 14:15:42.304388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.177 [2024-10-30 14:15:42.304394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.177 [2024-10-30 14:15:42.304543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.177 [2024-10-30 14:15:42.304692] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.177 [2024-10-30 14:15:42.304698] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.177 [2024-10-30 14:15:42.304703] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.177 [2024-10-30 14:15:42.307101] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.178 [2024-10-30 14:15:42.316563] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.178 [2024-10-30 14:15:42.317051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.178 [2024-10-30 14:15:42.317064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.178 [2024-10-30 14:15:42.317070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.178 [2024-10-30 14:15:42.317218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.178 [2024-10-30 14:15:42.317366] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.178 [2024-10-30 14:15:42.317372] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.178 [2024-10-30 14:15:42.317377] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.178 [2024-10-30 14:15:42.319771] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.178 [2024-10-30 14:15:42.329213] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.178 [2024-10-30 14:15:42.329661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.178 [2024-10-30 14:15:42.329674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.178 [2024-10-30 14:15:42.329679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.178 [2024-10-30 14:15:42.329834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.178 [2024-10-30 14:15:42.329984] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.178 [2024-10-30 14:15:42.329989] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.178 [2024-10-30 14:15:42.329994] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.178 [2024-10-30 14:15:42.332385] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.178 [2024-10-30 14:15:42.341834] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.178 [2024-10-30 14:15:42.342316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.178 [2024-10-30 14:15:42.342327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.178 [2024-10-30 14:15:42.342333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.178 [2024-10-30 14:15:42.342481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.178 [2024-10-30 14:15:42.342629] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.178 [2024-10-30 14:15:42.342635] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.178 [2024-10-30 14:15:42.342640] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.178 [2024-10-30 14:15:42.345035] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.178 [2024-10-30 14:15:42.354478] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.178 [2024-10-30 14:15:42.354862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.178 [2024-10-30 14:15:42.354892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.178 [2024-10-30 14:15:42.354901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.178 [2024-10-30 14:15:42.355068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.178 [2024-10-30 14:15:42.355219] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.178 [2024-10-30 14:15:42.355225] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.178 [2024-10-30 14:15:42.355231] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.178 [2024-10-30 14:15:42.357637] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.178 [2024-10-30 14:15:42.367104] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.178 [2024-10-30 14:15:42.367582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.178 [2024-10-30 14:15:42.367597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.178 [2024-10-30 14:15:42.367603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.178 [2024-10-30 14:15:42.367758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.178 [2024-10-30 14:15:42.367907] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.178 [2024-10-30 14:15:42.367916] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.178 [2024-10-30 14:15:42.367921] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.178 [2024-10-30 14:15:42.370312] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.178 [2024-10-30 14:15:42.379760] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.178 [2024-10-30 14:15:42.380244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.178 [2024-10-30 14:15:42.380274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.178 [2024-10-30 14:15:42.380282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.178 [2024-10-30 14:15:42.380447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.178 [2024-10-30 14:15:42.380599] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.178 [2024-10-30 14:15:42.380605] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.178 [2024-10-30 14:15:42.380610] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.178 [2024-10-30 14:15:42.383018] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.178 [2024-10-30 14:15:42.392330] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.178 [2024-10-30 14:15:42.392886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.178 [2024-10-30 14:15:42.392916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.178 [2024-10-30 14:15:42.392925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.178 [2024-10-30 14:15:42.393092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.178 [2024-10-30 14:15:42.393243] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.178 [2024-10-30 14:15:42.393249] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.178 [2024-10-30 14:15:42.393254] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.178 [2024-10-30 14:15:42.395656] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.178 [2024-10-30 14:15:42.404971] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.178 [2024-10-30 14:15:42.405488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.178 [2024-10-30 14:15:42.405503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.178 [2024-10-30 14:15:42.405509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.178 [2024-10-30 14:15:42.405658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.178 [2024-10-30 14:15:42.405811] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.178 [2024-10-30 14:15:42.405817] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.178 [2024-10-30 14:15:42.405822] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.178 [2024-10-30 14:15:42.408221] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.178 [2024-10-30 14:15:42.417540] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.178 [2024-10-30 14:15:42.418115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.179 [2024-10-30 14:15:42.418146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.179 [2024-10-30 14:15:42.418154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.179 [2024-10-30 14:15:42.418318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.179 [2024-10-30 14:15:42.418469] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.179 [2024-10-30 14:15:42.418475] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.179 [2024-10-30 14:15:42.418481] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.179 [2024-10-30 14:15:42.420884] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.179 [2024-10-30 14:15:42.430196] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.179 [2024-10-30 14:15:42.430644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.179 [2024-10-30 14:15:42.430659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.179 [2024-10-30 14:15:42.430664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.179 [2024-10-30 14:15:42.430818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.179 [2024-10-30 14:15:42.430967] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.179 [2024-10-30 14:15:42.430972] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.179 [2024-10-30 14:15:42.430977] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.179 [2024-10-30 14:15:42.433365] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.179 [2024-10-30 14:15:42.442809] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.179 [2024-10-30 14:15:42.443352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.179 [2024-10-30 14:15:42.443381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.179 [2024-10-30 14:15:42.443390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.179 [2024-10-30 14:15:42.443554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.179 [2024-10-30 14:15:42.443705] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.179 [2024-10-30 14:15:42.443711] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.179 [2024-10-30 14:15:42.443717] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.179 [2024-10-30 14:15:42.446123] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.179 [2024-10-30 14:15:42.455463] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.179 [2024-10-30 14:15:42.456029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.179 [2024-10-30 14:15:42.456063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.179 [2024-10-30 14:15:42.456071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.179 [2024-10-30 14:15:42.456235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.179 [2024-10-30 14:15:42.456387] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.179 [2024-10-30 14:15:42.456393] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.179 [2024-10-30 14:15:42.456398] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.179 [2024-10-30 14:15:42.458798] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.179 [2024-10-30 14:15:42.468118] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.179 [2024-10-30 14:15:42.468685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.179 [2024-10-30 14:15:42.468714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.179 [2024-10-30 14:15:42.468723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.179 [2024-10-30 14:15:42.468898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.179 [2024-10-30 14:15:42.469050] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.179 [2024-10-30 14:15:42.469056] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.179 [2024-10-30 14:15:42.469062] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.179 [2024-10-30 14:15:42.471462] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.444 [2024-10-30 14:15:42.480780] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.444 [2024-10-30 14:15:42.481369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.444 [2024-10-30 14:15:42.481400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.444 [2024-10-30 14:15:42.481408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.444 [2024-10-30 14:15:42.481573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.444 [2024-10-30 14:15:42.481724] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.444 [2024-10-30 14:15:42.481730] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.444 [2024-10-30 14:15:42.481736] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.444 [2024-10-30 14:15:42.484144] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.444 [2024-10-30 14:15:42.493448] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.444 [2024-10-30 14:15:42.494028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.444 [2024-10-30 14:15:42.494059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.444 [2024-10-30 14:15:42.494067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.444 [2024-10-30 14:15:42.494236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.444 [2024-10-30 14:15:42.494387] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.444 [2024-10-30 14:15:42.494393] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.444 [2024-10-30 14:15:42.494398] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.444 [2024-10-30 14:15:42.496799] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.444 [2024-10-30 14:15:42.506102] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.444 [2024-10-30 14:15:42.506676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.444 [2024-10-30 14:15:42.506706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.444 [2024-10-30 14:15:42.506714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.444 [2024-10-30 14:15:42.506885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.444 [2024-10-30 14:15:42.507037] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.444 [2024-10-30 14:15:42.507043] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.444 [2024-10-30 14:15:42.507048] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.444 [2024-10-30 14:15:42.509445] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.444 [2024-10-30 14:15:42.518674] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.444 [2024-10-30 14:15:42.519248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.444 [2024-10-30 14:15:42.519278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.444 [2024-10-30 14:15:42.519287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.444 [2024-10-30 14:15:42.519451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.444 [2024-10-30 14:15:42.519602] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.444 [2024-10-30 14:15:42.519609] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.444 [2024-10-30 14:15:42.519615] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.444 [2024-10-30 14:15:42.522026] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.444 [2024-10-30 14:15:42.531346] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.444 [2024-10-30 14:15:42.531920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.444 [2024-10-30 14:15:42.531950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.444 [2024-10-30 14:15:42.531959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.444 [2024-10-30 14:15:42.532125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.444 [2024-10-30 14:15:42.532277] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.444 [2024-10-30 14:15:42.532286] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.444 [2024-10-30 14:15:42.532292] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.444 [2024-10-30 14:15:42.534693] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.444 [2024-10-30 14:15:42.543998] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.444 [2024-10-30 14:15:42.544480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.444 [2024-10-30 14:15:42.544494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.444 [2024-10-30 14:15:42.544500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.444 [2024-10-30 14:15:42.544649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.444 [2024-10-30 14:15:42.544803] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.444 [2024-10-30 14:15:42.544809] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.444 [2024-10-30 14:15:42.544814] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.444 [2024-10-30 14:15:42.547203] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.444 [2024-10-30 14:15:42.556647] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.444 [2024-10-30 14:15:42.557145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.444 [2024-10-30 14:15:42.557159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.444 [2024-10-30 14:15:42.557164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.444 [2024-10-30 14:15:42.557312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.444 [2024-10-30 14:15:42.557461] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.444 [2024-10-30 14:15:42.557466] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.444 [2024-10-30 14:15:42.557471] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.444 [2024-10-30 14:15:42.559868] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.444 [2024-10-30 14:15:42.569310] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.444 [2024-10-30 14:15:42.569792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.444 [2024-10-30 14:15:42.569812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.444 [2024-10-30 14:15:42.569818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.444 [2024-10-30 14:15:42.569971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.444 [2024-10-30 14:15:42.570120] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.444 [2024-10-30 14:15:42.570126] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.444 [2024-10-30 14:15:42.570131] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.444 [2024-10-30 14:15:42.572531] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.444 [2024-10-30 14:15:42.581975] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.444 [2024-10-30 14:15:42.582536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.444 [2024-10-30 14:15:42.582566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.444 [2024-10-30 14:15:42.582574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.444 [2024-10-30 14:15:42.582739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.444 [2024-10-30 14:15:42.582898] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.444 [2024-10-30 14:15:42.582905] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.444 [2024-10-30 14:15:42.582910] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.585308] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.445 [2024-10-30 14:15:42.594613] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.595180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.595210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.595219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.595385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.595536] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.595542] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.595547] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.597952] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.445 [2024-10-30 14:15:42.607259] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.607842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.607872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.607880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.608047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.608198] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.608205] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.608210] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.610614] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.445 [2024-10-30 14:15:42.619934] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.620426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.620459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.620468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.620634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.620793] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.620800] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.620805] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.623202] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.445 [2024-10-30 14:15:42.632506] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.633080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.633110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.633119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.633286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.633437] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.633443] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.633448] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.635856] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.445 [2024-10-30 14:15:42.645159] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.645736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.645771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.645779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.645944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.646095] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.646101] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.646107] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.648507] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.445 [2024-10-30 14:15:42.657817] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.658418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.658448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.658457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.658625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.658784] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.658791] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.658796] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.661192] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.445 [2024-10-30 14:15:42.670503] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.670963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.670992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.671000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.671165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.671316] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.671322] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.671327] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.673736] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.445 [2024-10-30 14:15:42.683190] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.683762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.683792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.683800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.683967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.684118] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.684124] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.684129] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.686530] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.445 [2024-10-30 14:15:42.695835] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.696376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.696406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.696414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.696578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.696730] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.696740] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.696754] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.699149] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.445 [2024-10-30 14:15:42.708454] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.709026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.445 [2024-10-30 14:15:42.709055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.445 [2024-10-30 14:15:42.709064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.445 [2024-10-30 14:15:42.709228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.445 [2024-10-30 14:15:42.709379] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.445 [2024-10-30 14:15:42.709385] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.445 [2024-10-30 14:15:42.709391] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.445 [2024-10-30 14:15:42.711796] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.445 [2024-10-30 14:15:42.721109] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.445 [2024-10-30 14:15:42.721683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.446 [2024-10-30 14:15:42.721713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.446 [2024-10-30 14:15:42.721722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.446 [2024-10-30 14:15:42.721893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.446 [2024-10-30 14:15:42.722045] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.446 [2024-10-30 14:15:42.722051] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.446 [2024-10-30 14:15:42.722056] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.446 [2024-10-30 14:15:42.724454] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.446 [2024-10-30 14:15:42.733783] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.446 [2024-10-30 14:15:42.734361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.446 [2024-10-30 14:15:42.734391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.446 [2024-10-30 14:15:42.734400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.446 [2024-10-30 14:15:42.734564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.446 [2024-10-30 14:15:42.734715] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.446 [2024-10-30 14:15:42.734722] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.446 [2024-10-30 14:15:42.734727] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.446 [2024-10-30 14:15:42.737136] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.708 [2024-10-30 14:15:42.746453] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.708 [2024-10-30 14:15:42.747071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.708 [2024-10-30 14:15:42.747101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.708 [2024-10-30 14:15:42.747109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.708 [2024-10-30 14:15:42.747273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.708 [2024-10-30 14:15:42.747425] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.708 [2024-10-30 14:15:42.747430] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.708 [2024-10-30 14:15:42.747436] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.708 [2024-10-30 14:15:42.749840] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.708 [2024-10-30 14:15:42.759153] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.708 [2024-10-30 14:15:42.759729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.708 [2024-10-30 14:15:42.759766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.708 [2024-10-30 14:15:42.759774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.708 [2024-10-30 14:15:42.759939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.708 [2024-10-30 14:15:42.760090] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.708 [2024-10-30 14:15:42.760097] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.708 [2024-10-30 14:15:42.760102] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.708 [2024-10-30 14:15:42.762509] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.708 [2024-10-30 14:15:42.771825] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.708 [2024-10-30 14:15:42.772414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.708 [2024-10-30 14:15:42.772444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.708 [2024-10-30 14:15:42.772452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.708 [2024-10-30 14:15:42.772617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.708 [2024-10-30 14:15:42.772776] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.708 [2024-10-30 14:15:42.772783] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.708 [2024-10-30 14:15:42.772789] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.708 [2024-10-30 14:15:42.775190] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.708 [2024-10-30 14:15:42.784512] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.708 [2024-10-30 14:15:42.785080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.708 [2024-10-30 14:15:42.785113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.708 [2024-10-30 14:15:42.785122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.708 [2024-10-30 14:15:42.785286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.708 [2024-10-30 14:15:42.785437] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.708 [2024-10-30 14:15:42.785443] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.708 [2024-10-30 14:15:42.785448] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.708 [2024-10-30 14:15:42.787850] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.708 [2024-10-30 14:15:42.797153] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.708 [2024-10-30 14:15:42.797656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.708 [2024-10-30 14:15:42.797671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.708 [2024-10-30 14:15:42.797677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.708 [2024-10-30 14:15:42.797831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.708 [2024-10-30 14:15:42.797980] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.708 [2024-10-30 14:15:42.797985] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.708 [2024-10-30 14:15:42.797990] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.708 [2024-10-30 14:15:42.800382] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.708 [2024-10-30 14:15:42.809827] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.708 [2024-10-30 14:15:42.810393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.708 [2024-10-30 14:15:42.810423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.708 [2024-10-30 14:15:42.810431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.708 [2024-10-30 14:15:42.810595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.708 [2024-10-30 14:15:42.810753] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.708 [2024-10-30 14:15:42.810760] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.708 [2024-10-30 14:15:42.810765] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.708 [2024-10-30 14:15:42.813170] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.708 [2024-10-30 14:15:42.822545] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.708 [2024-10-30 14:15:42.823105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.708 [2024-10-30 14:15:42.823134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.708 [2024-10-30 14:15:42.823143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.708 [2024-10-30 14:15:42.823311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.708 [2024-10-30 14:15:42.823462] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.708 [2024-10-30 14:15:42.823468] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.708 [2024-10-30 14:15:42.823474] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.708 [2024-10-30 14:15:42.825881] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.708 [2024-10-30 14:15:42.835186] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.708 [2024-10-30 14:15:42.835676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.708 [2024-10-30 14:15:42.835690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.708 [2024-10-30 14:15:42.835696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.708 [2024-10-30 14:15:42.835850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.708 [2024-10-30 14:15:42.835999] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.708 [2024-10-30 14:15:42.836005] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.708 [2024-10-30 14:15:42.836009] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.708 [2024-10-30 14:15:42.838403] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.708 [2024-10-30 14:15:42.847850] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.708 [2024-10-30 14:15:42.848415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.848445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.848453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.848618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.848777] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.848784] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.848789] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.851188] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.709 [2024-10-30 14:15:42.860507] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.861031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.861062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.861070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.861234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.861386] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.861395] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.861401] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.863817] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.709 [2024-10-30 14:15:42.873129] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.873713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.873743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.873759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.873926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.874077] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.874083] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.874089] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.876484] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.709 [2024-10-30 14:15:42.885789] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.886353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.886383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.886391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.886555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.886707] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.886713] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.886718] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.889123] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.709 [2024-10-30 14:15:42.898428] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.898982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.899012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.899021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.899185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.899336] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.899342] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.899347] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.901758] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.709 [2024-10-30 14:15:42.911064] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.911543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.911558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.911564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.911713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.911867] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.911873] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.911878] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.914277] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.709 [2024-10-30 14:15:42.923722] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.924289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.924319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.924327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.924492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.924643] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.924649] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.924654] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.927059] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.709 [2024-10-30 14:15:42.936365] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.936753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.936769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.936774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.936923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.937072] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.937077] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.937083] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.939475] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.709 [2024-10-30 14:15:42.949058] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.949593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.949626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.949634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.949805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.949957] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.949963] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.949969] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.952365] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.709 [2024-10-30 14:15:42.961671] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.962171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.962185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.709 [2024-10-30 14:15:42.962191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.709 [2024-10-30 14:15:42.962340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.709 [2024-10-30 14:15:42.962488] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.709 [2024-10-30 14:15:42.962494] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.709 [2024-10-30 14:15:42.962499] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.709 [2024-10-30 14:15:42.964900] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.709 [2024-10-30 14:15:42.974345] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.709 [2024-10-30 14:15:42.974791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.709 [2024-10-30 14:15:42.974821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.710 [2024-10-30 14:15:42.974829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.710 [2024-10-30 14:15:42.974994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.710 [2024-10-30 14:15:42.975145] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.710 [2024-10-30 14:15:42.975151] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.710 [2024-10-30 14:15:42.975157] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.710 [2024-10-30 14:15:42.977560] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.710 [2024-10-30 14:15:42.987012] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.710 [2024-10-30 14:15:42.987578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.710 [2024-10-30 14:15:42.987608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.710 [2024-10-30 14:15:42.987616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.710 [2024-10-30 14:15:42.987791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.710 [2024-10-30 14:15:42.987943] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.710 [2024-10-30 14:15:42.987950] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.710 [2024-10-30 14:15:42.987955] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.710 [2024-10-30 14:15:42.990351] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.710 [2024-10-30 14:15:42.999667] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.710 [2024-10-30 14:15:43.000214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.710 [2024-10-30 14:15:43.000243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.710 [2024-10-30 14:15:43.000252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.710 [2024-10-30 14:15:43.000416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.710 [2024-10-30 14:15:43.000567] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.710 [2024-10-30 14:15:43.000574] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.710 [2024-10-30 14:15:43.000579] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.710 [2024-10-30 14:15:43.002992] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.971 [2024-10-30 14:15:43.012319] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.971 [2024-10-30 14:15:43.012872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.971 [2024-10-30 14:15:43.012902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.971 [2024-10-30 14:15:43.012911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.971 [2024-10-30 14:15:43.013076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.971 [2024-10-30 14:15:43.013228] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.971 [2024-10-30 14:15:43.013234] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.971 [2024-10-30 14:15:43.013239] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.971 [2024-10-30 14:15:43.015649] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.971 [2024-10-30 14:15:43.024964] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.971 [2024-10-30 14:15:43.025402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.971 [2024-10-30 14:15:43.025417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.971 [2024-10-30 14:15:43.025423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.971 [2024-10-30 14:15:43.025572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.971 [2024-10-30 14:15:43.025720] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.971 [2024-10-30 14:15:43.025733] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.971 [2024-10-30 14:15:43.025738] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.971 [2024-10-30 14:15:43.028143] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.971 [2024-10-30 14:15:43.037606] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.971 [2024-10-30 14:15:43.038155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.971 [2024-10-30 14:15:43.038186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.971 [2024-10-30 14:15:43.038194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.971 [2024-10-30 14:15:43.038359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.971 [2024-10-30 14:15:43.038511] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.971 [2024-10-30 14:15:43.038516] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.971 [2024-10-30 14:15:43.038522] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.040930] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.972 [2024-10-30 14:15:43.050259] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.050755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.050770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.050776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.050925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.051073] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.051079] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.972 [2024-10-30 14:15:43.051084] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.053479] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.972 [2024-10-30 14:15:43.062943] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.063500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.063530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.063539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.063704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.063862] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.063869] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.972 [2024-10-30 14:15:43.063875] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.066280] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.972 [2024-10-30 14:15:43.075611] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.076085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.076100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.076105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.076254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.076402] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.076408] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.972 [2024-10-30 14:15:43.076413] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.078814] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.972 [2024-10-30 14:15:43.088278] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.088757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.088770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.088775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.088924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.089072] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.089077] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.972 [2024-10-30 14:15:43.089082] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.091552] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.972 [2024-10-30 14:15:43.100882] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.101464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.101493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.101502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.101666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.101826] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.101833] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.972 [2024-10-30 14:15:43.101838] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.104240] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.972 [2024-10-30 14:15:43.113559] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.114143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.114173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.114182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.114346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.114498] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.114504] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.972 [2024-10-30 14:15:43.114509] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.116926] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.972 [2024-10-30 14:15:43.126245] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.126808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.126838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.126847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.127012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.127163] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.127169] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.972 [2024-10-30 14:15:43.127175] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.129577] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.972 [2024-10-30 14:15:43.138893] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.139368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.139383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.139389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.139538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.139686] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.139692] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.972 [2024-10-30 14:15:43.139697] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.142096] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.972 [2024-10-30 14:15:43.151537] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.152125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.152155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.152164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.152332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.152483] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.152489] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.972 [2024-10-30 14:15:43.152494] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.972 [2024-10-30 14:15:43.154902] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.972 [2024-10-30 14:15:43.164222] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.972 [2024-10-30 14:15:43.164813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.972 [2024-10-30 14:15:43.164844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.972 [2024-10-30 14:15:43.164852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.972 [2024-10-30 14:15:43.165019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.972 [2024-10-30 14:15:43.165170] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.972 [2024-10-30 14:15:43.165176] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.973 [2024-10-30 14:15:43.165182] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.973 [2024-10-30 14:15:43.167585] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.973 [2024-10-30 14:15:43.176901] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.973 [2024-10-30 14:15:43.177464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.973 [2024-10-30 14:15:43.177494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.973 [2024-10-30 14:15:43.177502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.973 [2024-10-30 14:15:43.177667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.973 [2024-10-30 14:15:43.177826] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.973 [2024-10-30 14:15:43.177833] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.973 [2024-10-30 14:15:43.177839] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.973 [2024-10-30 14:15:43.180237] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.973 [2024-10-30 14:15:43.189549] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.973 [2024-10-30 14:15:43.190097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.973 [2024-10-30 14:15:43.190128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.973 [2024-10-30 14:15:43.190136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.973 [2024-10-30 14:15:43.190300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.973 [2024-10-30 14:15:43.190451] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.973 [2024-10-30 14:15:43.190462] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.973 [2024-10-30 14:15:43.190467] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.973 [2024-10-30 14:15:43.192875] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.973 [2024-10-30 14:15:43.202190] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.973 [2024-10-30 14:15:43.202765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.973 [2024-10-30 14:15:43.202795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.973 [2024-10-30 14:15:43.202803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.973 [2024-10-30 14:15:43.202970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.973 [2024-10-30 14:15:43.203121] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.973 [2024-10-30 14:15:43.203127] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.973 [2024-10-30 14:15:43.203132] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.973 [2024-10-30 14:15:43.205527] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.973 [2024-10-30 14:15:43.214852] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.973 [2024-10-30 14:15:43.215415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.973 [2024-10-30 14:15:43.215445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.973 [2024-10-30 14:15:43.215453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.973 [2024-10-30 14:15:43.215618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.973 [2024-10-30 14:15:43.215775] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.973 [2024-10-30 14:15:43.215782] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.973 [2024-10-30 14:15:43.215788] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.973 [2024-10-30 14:15:43.218187] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.973 [2024-10-30 14:15:43.227494] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.973 [2024-10-30 14:15:43.228049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.973 [2024-10-30 14:15:43.228080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.973 [2024-10-30 14:15:43.228088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.973 [2024-10-30 14:15:43.228252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.973 [2024-10-30 14:15:43.228404] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.973 [2024-10-30 14:15:43.228410] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.973 [2024-10-30 14:15:43.228415] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.973 [2024-10-30 14:15:43.230825] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.973 [2024-10-30 14:15:43.240138] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.973 [2024-10-30 14:15:43.240615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.973 [2024-10-30 14:15:43.240629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.973 [2024-10-30 14:15:43.240635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.973 [2024-10-30 14:15:43.240791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.973 [2024-10-30 14:15:43.240940] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.973 [2024-10-30 14:15:43.240946] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.973 [2024-10-30 14:15:43.240951] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.973 [2024-10-30 14:15:43.243344] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.973 [2024-10-30 14:15:43.252794] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.973 [2024-10-30 14:15:43.253353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.973 [2024-10-30 14:15:43.253383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.973 [2024-10-30 14:15:43.253391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.973 [2024-10-30 14:15:43.253555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.973 [2024-10-30 14:15:43.253706] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.973 [2024-10-30 14:15:43.253712] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.973 [2024-10-30 14:15:43.253717] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.973 5642.60 IOPS, 22.04 MiB/s [2024-10-30T13:15:43.272Z] [2024-10-30 14:15:43.257258] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:44.973 [2024-10-30 14:15:43.265452] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.973 [2024-10-30 14:15:43.266020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.973 [2024-10-30 14:15:43.266051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:44.973 [2024-10-30 14:15:43.266059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:44.973 [2024-10-30 14:15:43.266224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:44.973 [2024-10-30 14:15:43.266375] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.973 [2024-10-30 14:15:43.266381] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.973 [2024-10-30 14:15:43.266387] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.973 [2024-10-30 14:15:43.268795] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.235 [2024-10-30 14:15:43.278107] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.235 [2024-10-30 14:15:43.278751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.235 [2024-10-30 14:15:43.278781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.235 [2024-10-30 14:15:43.278789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.235 [2024-10-30 14:15:43.278954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.235 [2024-10-30 14:15:43.279105] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.235 [2024-10-30 14:15:43.279111] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.235 [2024-10-30 14:15:43.279117] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.235 [2024-10-30 14:15:43.281514] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.235 [2024-10-30 14:15:43.290682] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.235 [2024-10-30 14:15:43.291274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.235 [2024-10-30 14:15:43.291304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.235 [2024-10-30 14:15:43.291313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.235 [2024-10-30 14:15:43.291477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.235 [2024-10-30 14:15:43.291628] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.235 [2024-10-30 14:15:43.291634] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.235 [2024-10-30 14:15:43.291640] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.235 [2024-10-30 14:15:43.294042] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.235 [2024-10-30 14:15:43.303351] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.235 [2024-10-30 14:15:43.303928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.235 [2024-10-30 14:15:43.303958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.235 [2024-10-30 14:15:43.303966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.235 [2024-10-30 14:15:43.304131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.235 [2024-10-30 14:15:43.304282] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.235 [2024-10-30 14:15:43.304288] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.235 [2024-10-30 14:15:43.304294] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.235 [2024-10-30 14:15:43.306698] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.235 [2024-10-30 14:15:43.316012] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.235 [2024-10-30 14:15:43.316555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.235 [2024-10-30 14:15:43.316585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.235 [2024-10-30 14:15:43.316594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.235 [2024-10-30 14:15:43.316770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.235 [2024-10-30 14:15:43.316922] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.235 [2024-10-30 14:15:43.316928] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.235 [2024-10-30 14:15:43.316933] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.235 [2024-10-30 14:15:43.319330] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.235 [2024-10-30 14:15:43.328637] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.235 [2024-10-30 14:15:43.329192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.235 [2024-10-30 14:15:43.329222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.235 [2024-10-30 14:15:43.329231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.235 [2024-10-30 14:15:43.329396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.235 [2024-10-30 14:15:43.329547] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.235 [2024-10-30 14:15:43.329553] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.235 [2024-10-30 14:15:43.329558] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.235 [2024-10-30 14:15:43.331966] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.236 [2024-10-30 14:15:43.341273] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.341850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.341881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.341889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.342056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.342207] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.342213] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.236 [2024-10-30 14:15:43.342219] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.236 [2024-10-30 14:15:43.344621] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.236 [2024-10-30 14:15:43.353933] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.354504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.354534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.354542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.354707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.354866] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.354877] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.236 [2024-10-30 14:15:43.354882] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.236 [2024-10-30 14:15:43.357280] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.236 [2024-10-30 14:15:43.366597] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.367172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.367201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.367210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.367374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.367526] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.367532] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.236 [2024-10-30 14:15:43.367537] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.236 [2024-10-30 14:15:43.369947] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.236 [2024-10-30 14:15:43.379257] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.379793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.379823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.379832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.379996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.380147] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.380153] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.236 [2024-10-30 14:15:43.380159] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.236 [2024-10-30 14:15:43.382564] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.236 [2024-10-30 14:15:43.391872] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.392357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.392372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.392378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.392526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.392675] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.392680] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.236 [2024-10-30 14:15:43.392685] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.236 [2024-10-30 14:15:43.395085] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.236 [2024-10-30 14:15:43.404519] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.405124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.405154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.405163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.405327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.405479] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.405486] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.236 [2024-10-30 14:15:43.405491] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.236 [2024-10-30 14:15:43.407897] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.236 [2024-10-30 14:15:43.417215] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.417783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.417813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.417821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.417988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.418139] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.418146] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.236 [2024-10-30 14:15:43.418151] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.236 [2024-10-30 14:15:43.420554] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.236 [2024-10-30 14:15:43.429863] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.430426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.430455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.430464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.430628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.430787] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.430794] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.236 [2024-10-30 14:15:43.430800] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.236 [2024-10-30 14:15:43.433198] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.236 [2024-10-30 14:15:43.442504] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.443069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.443099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.443108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.443272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.443424] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.443430] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.236 [2024-10-30 14:15:43.443435] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.236 [2024-10-30 14:15:43.445841] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.236 [2024-10-30 14:15:43.455158] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.236 [2024-10-30 14:15:43.455636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.236 [2024-10-30 14:15:43.455651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.236 [2024-10-30 14:15:43.455657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.236 [2024-10-30 14:15:43.455809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.236 [2024-10-30 14:15:43.455959] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.236 [2024-10-30 14:15:43.455965] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.237 [2024-10-30 14:15:43.455969] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.237 [2024-10-30 14:15:43.458363] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.237 [2024-10-30 14:15:43.467824] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.237 [2024-10-30 14:15:43.468391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.237 [2024-10-30 14:15:43.468421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.237 [2024-10-30 14:15:43.468430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.237 [2024-10-30 14:15:43.468594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.237 [2024-10-30 14:15:43.468752] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.237 [2024-10-30 14:15:43.468759] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.237 [2024-10-30 14:15:43.468764] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.237 [2024-10-30 14:15:43.471160] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.237 [2024-10-30 14:15:43.480475] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.237 [2024-10-30 14:15:43.481041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.237 [2024-10-30 14:15:43.481070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.237 [2024-10-30 14:15:43.481079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.237 [2024-10-30 14:15:43.481247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.237 [2024-10-30 14:15:43.481398] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.237 [2024-10-30 14:15:43.481404] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.237 [2024-10-30 14:15:43.481410] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.237 [2024-10-30 14:15:43.483815] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.237 [2024-10-30 14:15:43.493133] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.237 [2024-10-30 14:15:43.493700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.237 [2024-10-30 14:15:43.493730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.237 [2024-10-30 14:15:43.493739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.237 [2024-10-30 14:15:43.493911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.237 [2024-10-30 14:15:43.494063] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.237 [2024-10-30 14:15:43.494069] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.237 [2024-10-30 14:15:43.494075] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.237 [2024-10-30 14:15:43.496475] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.237 [2024-10-30 14:15:43.505784] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.237 [2024-10-30 14:15:43.506353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.237 [2024-10-30 14:15:43.506382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.237 [2024-10-30 14:15:43.506392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.237 [2024-10-30 14:15:43.506556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.237 [2024-10-30 14:15:43.506707] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.237 [2024-10-30 14:15:43.506713] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.237 [2024-10-30 14:15:43.506719] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.237 [2024-10-30 14:15:43.509129] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.237 [2024-10-30 14:15:43.518453] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.237 [2024-10-30 14:15:43.519022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.237 [2024-10-30 14:15:43.519053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.237 [2024-10-30 14:15:43.519061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.237 [2024-10-30 14:15:43.519226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.237 [2024-10-30 14:15:43.519377] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.237 [2024-10-30 14:15:43.519386] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.237 [2024-10-30 14:15:43.519392] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.237 [2024-10-30 14:15:43.521799] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.237 [2024-10-30 14:15:43.531114] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.237 [2024-10-30 14:15:43.531547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.237 [2024-10-30 14:15:43.531561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.237 [2024-10-30 14:15:43.531567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.237 [2024-10-30 14:15:43.531716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.237 [2024-10-30 14:15:43.531870] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.237 [2024-10-30 14:15:43.531876] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.237 [2024-10-30 14:15:43.531881] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.534274] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.501 [2024-10-30 14:15:43.543717] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.544030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.501 [2024-10-30 14:15:43.544045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.501 [2024-10-30 14:15:43.544050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.501 [2024-10-30 14:15:43.544199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.501 [2024-10-30 14:15:43.544347] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.501 [2024-10-30 14:15:43.544353] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.501 [2024-10-30 14:15:43.544358] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.546755] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.501 [2024-10-30 14:15:43.556339] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.556869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.501 [2024-10-30 14:15:43.556900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.501 [2024-10-30 14:15:43.556908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.501 [2024-10-30 14:15:43.557075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.501 [2024-10-30 14:15:43.557226] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.501 [2024-10-30 14:15:43.557232] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.501 [2024-10-30 14:15:43.557237] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.559646] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.501 [2024-10-30 14:15:43.569007] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.569444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.501 [2024-10-30 14:15:43.569459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.501 [2024-10-30 14:15:43.569464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.501 [2024-10-30 14:15:43.569613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.501 [2024-10-30 14:15:43.569766] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.501 [2024-10-30 14:15:43.569772] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.501 [2024-10-30 14:15:43.569777] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.572175] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.501 [2024-10-30 14:15:43.581625] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.582202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.501 [2024-10-30 14:15:43.582232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.501 [2024-10-30 14:15:43.582240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.501 [2024-10-30 14:15:43.582405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.501 [2024-10-30 14:15:43.582556] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.501 [2024-10-30 14:15:43.582562] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.501 [2024-10-30 14:15:43.582567] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.584973] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.501 [2024-10-30 14:15:43.594289] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.594734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.501 [2024-10-30 14:15:43.594754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.501 [2024-10-30 14:15:43.594760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.501 [2024-10-30 14:15:43.594909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.501 [2024-10-30 14:15:43.595057] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.501 [2024-10-30 14:15:43.595062] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.501 [2024-10-30 14:15:43.595067] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.597467] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.501 [2024-10-30 14:15:43.606923] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.607479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.501 [2024-10-30 14:15:43.607509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.501 [2024-10-30 14:15:43.607517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.501 [2024-10-30 14:15:43.607682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.501 [2024-10-30 14:15:43.607838] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.501 [2024-10-30 14:15:43.607844] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.501 [2024-10-30 14:15:43.607850] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.610248] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.501 [2024-10-30 14:15:43.619573] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.620024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.501 [2024-10-30 14:15:43.620039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.501 [2024-10-30 14:15:43.620046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.501 [2024-10-30 14:15:43.620196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.501 [2024-10-30 14:15:43.620344] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.501 [2024-10-30 14:15:43.620350] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.501 [2024-10-30 14:15:43.620355] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.622754] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.501 [2024-10-30 14:15:43.632203] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.632637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.501 [2024-10-30 14:15:43.632649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.501 [2024-10-30 14:15:43.632655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.501 [2024-10-30 14:15:43.632808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.501 [2024-10-30 14:15:43.632969] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.501 [2024-10-30 14:15:43.632975] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.501 [2024-10-30 14:15:43.632980] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.635373] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.501 [2024-10-30 14:15:43.644817] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.645285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.501 [2024-10-30 14:15:43.645297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.501 [2024-10-30 14:15:43.645309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.501 [2024-10-30 14:15:43.645457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.501 [2024-10-30 14:15:43.645605] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.501 [2024-10-30 14:15:43.645611] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.501 [2024-10-30 14:15:43.645616] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.501 [2024-10-30 14:15:43.648016] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.501 [2024-10-30 14:15:43.657464] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.501 [2024-10-30 14:15:43.657814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.502 [2024-10-30 14:15:43.657826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.502 [2024-10-30 14:15:43.657831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.502 [2024-10-30 14:15:43.657980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.502 [2024-10-30 14:15:43.658128] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.502 [2024-10-30 14:15:43.658133] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.502 [2024-10-30 14:15:43.658138] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.502 [2024-10-30 14:15:43.660525] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.502 [2024-10-30 14:15:43.670113] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.502 [2024-10-30 14:15:43.670550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.502 [2024-10-30 14:15:43.670561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.502 [2024-10-30 14:15:43.670567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.502 [2024-10-30 14:15:43.670715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.502 [2024-10-30 14:15:43.670867] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.502 [2024-10-30 14:15:43.670873] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.502 [2024-10-30 14:15:43.670878] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.502 [2024-10-30 14:15:43.673267] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.502 [2024-10-30 14:15:43.682710] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.502 [2024-10-30 14:15:43.683230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.502 [2024-10-30 14:15:43.683260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.502 [2024-10-30 14:15:43.683269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.502 [2024-10-30 14:15:43.683433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.502 [2024-10-30 14:15:43.683584] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.502 [2024-10-30 14:15:43.683594] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.502 [2024-10-30 14:15:43.683599] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.502 [2024-10-30 14:15:43.686007] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.502 [2024-10-30 14:15:43.695320] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.502 [2024-10-30 14:15:43.695765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.502 [2024-10-30 14:15:43.695782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.502 [2024-10-30 14:15:43.695788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.502 [2024-10-30 14:15:43.695939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.502 [2024-10-30 14:15:43.696088] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.502 [2024-10-30 14:15:43.696094] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.502 [2024-10-30 14:15:43.696099] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.502 [2024-10-30 14:15:43.698490] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.502 [2024-10-30 14:15:43.707941] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.502 [2024-10-30 14:15:43.708503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.502 [2024-10-30 14:15:43.708533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.502 [2024-10-30 14:15:43.708542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.502 [2024-10-30 14:15:43.708708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.502 [2024-10-30 14:15:43.708865] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.502 [2024-10-30 14:15:43.708873] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.502 [2024-10-30 14:15:43.708878] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.502 [2024-10-30 14:15:43.711273] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.502 [2024-10-30 14:15:43.720586] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.502 [2024-10-30 14:15:43.721127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.502 [2024-10-30 14:15:43.721157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.502 [2024-10-30 14:15:43.721166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.502 [2024-10-30 14:15:43.721330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.502 [2024-10-30 14:15:43.721481] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.502 [2024-10-30 14:15:43.721487] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.502 [2024-10-30 14:15:43.721493] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.502 [2024-10-30 14:15:43.723903] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.502 [2024-10-30 14:15:43.733231] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.502 [2024-10-30 14:15:43.733692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.502 [2024-10-30 14:15:43.733707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.502 [2024-10-30 14:15:43.733713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.502 [2024-10-30 14:15:43.733866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.502 [2024-10-30 14:15:43.734015] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.502 [2024-10-30 14:15:43.734021] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.502 [2024-10-30 14:15:43.734026] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.502 [2024-10-30 14:15:43.736425] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.502 [2024-10-30 14:15:43.745869] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.502 [2024-10-30 14:15:43.746387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.502 [2024-10-30 14:15:43.746418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.502 [2024-10-30 14:15:43.746426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.502 [2024-10-30 14:15:43.746590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.502 [2024-10-30 14:15:43.746741] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.502 [2024-10-30 14:15:43.746754] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.502 [2024-10-30 14:15:43.746760] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.502 [2024-10-30 14:15:43.749156] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1202861 Killed "${NVMF_APP[@]}" "$@" 00:28:45.502 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:45.502 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:45.502 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.502 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.502 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:45.502 [2024-10-30 14:15:43.758478] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.502 [2024-10-30 14:15:43.759091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.502 [2024-10-30 14:15:43.759121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.502 [2024-10-30 14:15:43.759129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.502 [2024-10-30 14:15:43.759294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.503 [2024-10-30 14:15:43.759445] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.503 [2024-10-30 14:15:43.759455] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.503 [2024-10-30 14:15:43.759461] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.503 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1204409 00:28:45.503 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1204409 00:28:45.503 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:45.503 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1204409 ']' 00:28:45.503 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.503 [2024-10-30 14:15:43.761870] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.503 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.503 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:45.503 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.503 14:15:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:45.503 [2024-10-30 14:15:43.771060] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.503 [2024-10-30 14:15:43.771556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.503 [2024-10-30 14:15:43.771571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.503 [2024-10-30 14:15:43.771577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.503 [2024-10-30 14:15:43.771726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.503 [2024-10-30 14:15:43.771881] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.503 [2024-10-30 14:15:43.771888] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.503 [2024-10-30 14:15:43.771894] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.503 [2024-10-30 14:15:43.774286] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.503 [2024-10-30 14:15:43.783727] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.503 [2024-10-30 14:15:43.784190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.503 [2024-10-30 14:15:43.784203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.503 [2024-10-30 14:15:43.784208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.503 [2024-10-30 14:15:43.784356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.503 [2024-10-30 14:15:43.784504] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.503 [2024-10-30 14:15:43.784510] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.503 [2024-10-30 14:15:43.784516] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.503 [2024-10-30 14:15:43.786913] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.503 [2024-10-30 14:15:43.796385] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.503 [2024-10-30 14:15:43.796851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.503 [2024-10-30 14:15:43.796881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.503 [2024-10-30 14:15:43.796889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.503 [2024-10-30 14:15:43.797056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.503 [2024-10-30 14:15:43.797208] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.503 [2024-10-30 14:15:43.797214] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.503 [2024-10-30 14:15:43.797220] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.766 [2024-10-30 14:15:43.799622] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.766 [2024-10-30 14:15:43.809086] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.766 [2024-10-30 14:15:43.809592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.766 [2024-10-30 14:15:43.809607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.766 [2024-10-30 14:15:43.809613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.766 [2024-10-30 14:15:43.809766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.766 [2024-10-30 14:15:43.809915] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.766 [2024-10-30 14:15:43.809921] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.766 [2024-10-30 14:15:43.809926] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.766 [2024-10-30 14:15:43.812321] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.766 [2024-10-30 14:15:43.813750] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:28:45.766 [2024-10-30 14:15:43.813796] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.766 [2024-10-30 14:15:43.821790] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.766 [2024-10-30 14:15:43.822168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.766 [2024-10-30 14:15:43.822181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.766 [2024-10-30 14:15:43.822187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.766 [2024-10-30 14:15:43.822336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.766 [2024-10-30 14:15:43.822484] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.766 [2024-10-30 14:15:43.822490] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.766 [2024-10-30 14:15:43.822495] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.766 [2024-10-30 14:15:43.824901] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.766 [2024-10-30 14:15:43.834485] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.766 [2024-10-30 14:15:43.835043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.766 [2024-10-30 14:15:43.835073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.766 [2024-10-30 14:15:43.835082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.766 [2024-10-30 14:15:43.835247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.766 [2024-10-30 14:15:43.835398] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.766 [2024-10-30 14:15:43.835405] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.766 [2024-10-30 14:15:43.835410] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.766 [2024-10-30 14:15:43.837814] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.766 [2024-10-30 14:15:43.847136] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.766 [2024-10-30 14:15:43.847715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.766 [2024-10-30 14:15:43.847744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.766 [2024-10-30 14:15:43.847760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.766 [2024-10-30 14:15:43.847927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.766 [2024-10-30 14:15:43.848079] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.766 [2024-10-30 14:15:43.848086] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.766 [2024-10-30 14:15:43.848091] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.766 [2024-10-30 14:15:43.850495] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.766 [2024-10-30 14:15:43.859785] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.766 [2024-10-30 14:15:43.860394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.766 [2024-10-30 14:15:43.860424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.766 [2024-10-30 14:15:43.860433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.766 [2024-10-30 14:15:43.860598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.766 [2024-10-30 14:15:43.860757] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.766 [2024-10-30 14:15:43.860764] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.766 [2024-10-30 14:15:43.860769] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.766 [2024-10-30 14:15:43.863168] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.766 [2024-10-30 14:15:43.872356] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.766 [2024-10-30 14:15:43.873049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.766 [2024-10-30 14:15:43.873083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.766 [2024-10-30 14:15:43.873092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.766 [2024-10-30 14:15:43.873257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.766 [2024-10-30 14:15:43.873408] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.766 [2024-10-30 14:15:43.873414] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.766 [2024-10-30 14:15:43.873420] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.766 [2024-10-30 14:15:43.875828] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.766 [2024-10-30 14:15:43.885001] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.766 [2024-10-30 14:15:43.885571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.766 [2024-10-30 14:15:43.885601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.766 [2024-10-30 14:15:43.885610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.766 [2024-10-30 14:15:43.885781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.766 [2024-10-30 14:15:43.885934] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.766 [2024-10-30 14:15:43.885940] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.766 [2024-10-30 14:15:43.885945] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.766 [2024-10-30 14:15:43.888339] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.766 [2024-10-30 14:15:43.897656] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.766 [2024-10-30 14:15:43.898330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.766 [2024-10-30 14:15:43.898360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.766 [2024-10-30 14:15:43.898368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.766 [2024-10-30 14:15:43.898533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.766 [2024-10-30 14:15:43.898684] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.766 [2024-10-30 14:15:43.898690] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.766 [2024-10-30 14:15:43.898696] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.766 [2024-10-30 14:15:43.901099] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.766 [2024-10-30 14:15:43.905216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:45.766 [2024-10-30 14:15:43.910276] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.766 [2024-10-30 14:15:43.910859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.766 [2024-10-30 14:15:43.910890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.766 [2024-10-30 14:15:43.910903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.766 [2024-10-30 14:15:43.911071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:43.911223] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:43.911229] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:43.911235] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:43.913640] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.767 [2024-10-30 14:15:43.922972] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:43.923483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.767 [2024-10-30 14:15:43.923514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.767 [2024-10-30 14:15:43.923523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.767 [2024-10-30 14:15:43.923690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:43.923848] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:43.923855] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:43.923861] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:43.926261] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.767 [2024-10-30 14:15:43.934498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.767 [2024-10-30 14:15:43.934520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.767 [2024-10-30 14:15:43.934527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.767 [2024-10-30 14:15:43.934532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.767 [2024-10-30 14:15:43.934537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:45.767 [2024-10-30 14:15:43.935578] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:43.935636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.767 [2024-10-30 14:15:43.935793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.767 [2024-10-30 14:15:43.936013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.767 [2024-10-30 14:15:43.936076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.767 [2024-10-30 14:15:43.936091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.767 [2024-10-30 14:15:43.936097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.767 [2024-10-30 14:15:43.936246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:43.936396] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:43.936402] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:43.936407] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:43.938813] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.767 [2024-10-30 14:15:43.948283] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:43.948862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.767 [2024-10-30 14:15:43.948893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.767 [2024-10-30 14:15:43.948902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.767 [2024-10-30 14:15:43.949070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:43.949222] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:43.949228] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:43.949234] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:43.951635] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.767 [2024-10-30 14:15:43.960965] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:43.961439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.767 [2024-10-30 14:15:43.961470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.767 [2024-10-30 14:15:43.961479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.767 [2024-10-30 14:15:43.961645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:43.961802] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:43.961809] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:43.961814] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:43.964216] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.767 [2024-10-30 14:15:43.973544] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:43.974086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.767 [2024-10-30 14:15:43.974116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.767 [2024-10-30 14:15:43.974125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.767 [2024-10-30 14:15:43.974292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:43.974443] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:43.974449] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:43.974455] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:43.976866] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.767 [2024-10-30 14:15:43.986182] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:43.986650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.767 [2024-10-30 14:15:43.986664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.767 [2024-10-30 14:15:43.986671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.767 [2024-10-30 14:15:43.986824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:43.986973] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:43.986979] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:43.986985] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:43.989381] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.767 [2024-10-30 14:15:43.998821] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:43.999229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.767 [2024-10-30 14:15:43.999241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.767 [2024-10-30 14:15:43.999247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.767 [2024-10-30 14:15:43.999396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:43.999544] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:43.999550] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:43.999555] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:44.001962] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.767 [2024-10-30 14:15:44.011423] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:44.011881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.767 [2024-10-30 14:15:44.011895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.767 [2024-10-30 14:15:44.011900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.767 [2024-10-30 14:15:44.012049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:44.012198] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:44.012204] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:44.012209] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:44.014604] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.767 [2024-10-30 14:15:44.024072] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:44.024619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.767 [2024-10-30 14:15:44.024649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.767 [2024-10-30 14:15:44.024658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.767 [2024-10-30 14:15:44.024832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.767 [2024-10-30 14:15:44.024985] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.767 [2024-10-30 14:15:44.024992] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.767 [2024-10-30 14:15:44.024997] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.767 [2024-10-30 14:15:44.027396] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.767 [2024-10-30 14:15:44.036712] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.767 [2024-10-30 14:15:44.037274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.768 [2024-10-30 14:15:44.037304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.768 [2024-10-30 14:15:44.037313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.768 [2024-10-30 14:15:44.037478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.768 [2024-10-30 14:15:44.037630] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.768 [2024-10-30 14:15:44.037636] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.768 [2024-10-30 14:15:44.037642] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.768 [2024-10-30 14:15:44.040048] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.768 [2024-10-30 14:15:44.049367] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.768 [2024-10-30 14:15:44.049847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.768 [2024-10-30 14:15:44.049862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.768 [2024-10-30 14:15:44.049868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.768 [2024-10-30 14:15:44.050017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.768 [2024-10-30 14:15:44.050165] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.768 [2024-10-30 14:15:44.050171] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.768 [2024-10-30 14:15:44.050176] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.768 [2024-10-30 14:15:44.052568] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.768 [2024-10-30 14:15:44.062027] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.768 [2024-10-30 14:15:44.062484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.768 [2024-10-30 14:15:44.062496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:45.768 [2024-10-30 14:15:44.062502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:45.768 [2024-10-30 14:15:44.062651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:45.768 [2024-10-30 14:15:44.062804] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.768 [2024-10-30 14:15:44.062814] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.768 [2024-10-30 14:15:44.062819] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.030 [2024-10-30 14:15:44.065214] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.030 [2024-10-30 14:15:44.074670] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.030 [2024-10-30 14:15:44.075108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.030 [2024-10-30 14:15:44.075121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.030 [2024-10-30 14:15:44.075126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.030 [2024-10-30 14:15:44.075274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.030 [2024-10-30 14:15:44.075422] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.030 [2024-10-30 14:15:44.075428] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.030 [2024-10-30 14:15:44.075433] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.030 [2024-10-30 14:15:44.077830] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.030 [2024-10-30 14:15:44.087289] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.030 [2024-10-30 14:15:44.087753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.030 [2024-10-30 14:15:44.087784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.030 [2024-10-30 14:15:44.087792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.030 [2024-10-30 14:15:44.087958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.030 [2024-10-30 14:15:44.088109] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.030 [2024-10-30 14:15:44.088116] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.030 [2024-10-30 14:15:44.088123] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.030 [2024-10-30 14:15:44.090524] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.030 [2024-10-30 14:15:44.099986] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.030 [2024-10-30 14:15:44.100449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.030 [2024-10-30 14:15:44.100464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.030 [2024-10-30 14:15:44.100470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.030 [2024-10-30 14:15:44.100618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.030 [2024-10-30 14:15:44.100772] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.030 [2024-10-30 14:15:44.100778] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.030 [2024-10-30 14:15:44.100783] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.030 [2024-10-30 14:15:44.103180] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.030 [2024-10-30 14:15:44.112627] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.030 [2024-10-30 14:15:44.113135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.030 [2024-10-30 14:15:44.113147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.030 [2024-10-30 14:15:44.113153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.030 [2024-10-30 14:15:44.113302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.030 [2024-10-30 14:15:44.113450] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.030 [2024-10-30 14:15:44.113456] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.030 [2024-10-30 14:15:44.113461] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.030 [2024-10-30 14:15:44.115969] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.030 [2024-10-30 14:15:44.125290] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.030 [2024-10-30 14:15:44.125879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.030 [2024-10-30 14:15:44.125909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.030 [2024-10-30 14:15:44.125918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.030 [2024-10-30 14:15:44.126082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.030 [2024-10-30 14:15:44.126233] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.030 [2024-10-30 14:15:44.126239] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.030 [2024-10-30 14:15:44.126245] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.030 [2024-10-30 14:15:44.128652] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.030 [2024-10-30 14:15:44.137975] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.030 [2024-10-30 14:15:44.138441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.030 [2024-10-30 14:15:44.138455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.030 [2024-10-30 14:15:44.138461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.030 [2024-10-30 14:15:44.138610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.030 [2024-10-30 14:15:44.138764] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.030 [2024-10-30 14:15:44.138771] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.030 [2024-10-30 14:15:44.138776] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.030 [2024-10-30 14:15:44.141172] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.030 [2024-10-30 14:15:44.150630] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.030 [2024-10-30 14:15:44.151184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.030 [2024-10-30 14:15:44.151214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.030 [2024-10-30 14:15:44.151222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.030 [2024-10-30 14:15:44.151387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.030 [2024-10-30 14:15:44.151539] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.030 [2024-10-30 14:15:44.151545] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.030 [2024-10-30 14:15:44.151551] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.030 [2024-10-30 14:15:44.153959] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.030 [2024-10-30 14:15:44.163281] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.030 [2024-10-30 14:15:44.163824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.030 [2024-10-30 14:15:44.163855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.030 [2024-10-30 14:15:44.163864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.030 [2024-10-30 14:15:44.164028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.030 [2024-10-30 14:15:44.164180] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.030 [2024-10-30 14:15:44.164185] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.030 [2024-10-30 14:15:44.164191] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.030 [2024-10-30 14:15:44.166602] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.030 [2024-10-30 14:15:44.175925] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.030 [2024-10-30 14:15:44.176290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.030 [2024-10-30 14:15:44.176305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.030 [2024-10-30 14:15:44.176310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.176459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.176608] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.176614] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.176619] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.179016] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.031 [2024-10-30 14:15:44.188615] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.189079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.189091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.189097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.189249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.189397] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.189403] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.189408] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.191805] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.031 [2024-10-30 14:15:44.201260] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.201809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.201839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.201848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.202015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.202166] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.202173] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.202179] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.204582] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.031 [2024-10-30 14:15:44.213908] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.214432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.214462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.214471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.214635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.214792] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.214799] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.214805] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.217212] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.031 [2024-10-30 14:15:44.226527] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.227056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.227086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.227095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.227259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.227411] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.227423] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.227428] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.229834] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.031 [2024-10-30 14:15:44.239145] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.239604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.239619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.239625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.239779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.239927] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.239933] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.239938] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.242326] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.031 [2024-10-30 14:15:44.251767] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.252221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.252233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.252238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.252386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.252534] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.252540] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.252545] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.254943] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.031 4702.17 IOPS, 18.37 MiB/s [2024-10-30T13:15:44.330Z] [2024-10-30 14:15:44.264391] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.264867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.264897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.264906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.265071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.265222] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.265228] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.265234] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.267652] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.031 [2024-10-30 14:15:44.276964] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.277281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.277296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.277302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.277451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.277599] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.277604] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.277610] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.280011] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.031 [2024-10-30 14:15:44.289604] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.290219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.290250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.290258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.290423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.290574] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.290581] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.290586] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.292990] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.031 [2024-10-30 14:15:44.302301] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.031 [2024-10-30 14:15:44.302772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.031 [2024-10-30 14:15:44.302789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.031 [2024-10-30 14:15:44.302794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.031 [2024-10-30 14:15:44.302943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.031 [2024-10-30 14:15:44.303092] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.031 [2024-10-30 14:15:44.303098] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.031 [2024-10-30 14:15:44.303103] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.031 [2024-10-30 14:15:44.305500] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.032 [2024-10-30 14:15:44.314952] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.032 [2024-10-30 14:15:44.315413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.032 [2024-10-30 14:15:44.315425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.032 [2024-10-30 14:15:44.315430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.032 [2024-10-30 14:15:44.315579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.032 [2024-10-30 14:15:44.315727] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.032 [2024-10-30 14:15:44.315732] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.032 [2024-10-30 14:15:44.315737] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.032 [2024-10-30 14:15:44.318141] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.032 [2024-10-30 14:15:44.327586] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.032 [2024-10-30 14:15:44.328053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.032 [2024-10-30 14:15:44.328083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.032 [2024-10-30 14:15:44.328092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.328257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.294 [2024-10-30 14:15:44.328409] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.294 [2024-10-30 14:15:44.328416] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.294 [2024-10-30 14:15:44.328421] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.294 [2024-10-30 14:15:44.330824] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.294 [2024-10-30 14:15:44.340267] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.294 [2024-10-30 14:15:44.340825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.294 [2024-10-30 14:15:44.340856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.294 [2024-10-30 14:15:44.340864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.341029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.294 [2024-10-30 14:15:44.341181] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.294 [2024-10-30 14:15:44.341187] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.294 [2024-10-30 14:15:44.341192] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.294 [2024-10-30 14:15:44.343595] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.294 [2024-10-30 14:15:44.352905] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.294 [2024-10-30 14:15:44.353469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.294 [2024-10-30 14:15:44.353499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.294 [2024-10-30 14:15:44.353511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.353675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.294 [2024-10-30 14:15:44.353833] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.294 [2024-10-30 14:15:44.353840] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.294 [2024-10-30 14:15:44.353845] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.294 [2024-10-30 14:15:44.356241] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.294 [2024-10-30 14:15:44.365549] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.294 [2024-10-30 14:15:44.366014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.294 [2024-10-30 14:15:44.366044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.294 [2024-10-30 14:15:44.366052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.366217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.294 [2024-10-30 14:15:44.366369] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.294 [2024-10-30 14:15:44.366375] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.294 [2024-10-30 14:15:44.366380] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.294 [2024-10-30 14:15:44.368791] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.294 [2024-10-30 14:15:44.378249] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.294 [2024-10-30 14:15:44.378807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.294 [2024-10-30 14:15:44.378838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.294 [2024-10-30 14:15:44.378846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.379013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.294 [2024-10-30 14:15:44.379165] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.294 [2024-10-30 14:15:44.379171] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.294 [2024-10-30 14:15:44.379176] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.294 [2024-10-30 14:15:44.381580] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.294 [2024-10-30 14:15:44.390893] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.294 [2024-10-30 14:15:44.391443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.294 [2024-10-30 14:15:44.391473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.294 [2024-10-30 14:15:44.391481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.391646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.294 [2024-10-30 14:15:44.391803] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.294 [2024-10-30 14:15:44.391813] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.294 [2024-10-30 14:15:44.391819] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.294 [2024-10-30 14:15:44.394212] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.294 [2024-10-30 14:15:44.403515] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.294 [2024-10-30 14:15:44.403987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.294 [2024-10-30 14:15:44.404002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.294 [2024-10-30 14:15:44.404008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.404157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.294 [2024-10-30 14:15:44.404305] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.294 [2024-10-30 14:15:44.404311] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.294 [2024-10-30 14:15:44.404316] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.294 [2024-10-30 14:15:44.406705] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.294 [2024-10-30 14:15:44.416154] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.294 [2024-10-30 14:15:44.416594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.294 [2024-10-30 14:15:44.416606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.294 [2024-10-30 14:15:44.416611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.416769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.294 [2024-10-30 14:15:44.416918] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.294 [2024-10-30 14:15:44.416923] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.294 [2024-10-30 14:15:44.416928] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.294 [2024-10-30 14:15:44.419321] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.294 [2024-10-30 14:15:44.428774] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.294 [2024-10-30 14:15:44.429233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.294 [2024-10-30 14:15:44.429263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.294 [2024-10-30 14:15:44.429272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.429436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.294 [2024-10-30 14:15:44.429588] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.294 [2024-10-30 14:15:44.429594] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.294 [2024-10-30 14:15:44.429599] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.294 [2024-10-30 14:15:44.432010] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.294 [2024-10-30 14:15:44.441394] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.294 [2024-10-30 14:15:44.441824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.294 [2024-10-30 14:15:44.441854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.294 [2024-10-30 14:15:44.441863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.294 [2024-10-30 14:15:44.442030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.442181] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.442187] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.442193] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.444597] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.295 [2024-10-30 14:15:44.454054] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.295 [2024-10-30 14:15:44.454578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.295 [2024-10-30 14:15:44.454607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.295 [2024-10-30 14:15:44.454616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.295 [2024-10-30 14:15:44.454786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.454938] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.454944] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.454950] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.457348] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.295 [2024-10-30 14:15:44.466659] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.295 [2024-10-30 14:15:44.467204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.295 [2024-10-30 14:15:44.467235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.295 [2024-10-30 14:15:44.467243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.295 [2024-10-30 14:15:44.467408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.467567] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.467574] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.467579] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.469979] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.295 [2024-10-30 14:15:44.479293] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.295 [2024-10-30 14:15:44.479697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.295 [2024-10-30 14:15:44.479728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.295 [2024-10-30 14:15:44.479737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.295 [2024-10-30 14:15:44.479907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.480059] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.480065] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.480070] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.482467] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.295 [2024-10-30 14:15:44.491924] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.295 [2024-10-30 14:15:44.492484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.295 [2024-10-30 14:15:44.492514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.295 [2024-10-30 14:15:44.492523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.295 [2024-10-30 14:15:44.492688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.492845] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.492852] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.492857] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.495256] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.295 [2024-10-30 14:15:44.504562] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.295 [2024-10-30 14:15:44.505186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.295 [2024-10-30 14:15:44.505215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.295 [2024-10-30 14:15:44.505224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.295 [2024-10-30 14:15:44.505388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.505540] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.505546] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.505552] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.507956] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.295 [2024-10-30 14:15:44.517271] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.295 [2024-10-30 14:15:44.517719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.295 [2024-10-30 14:15:44.517734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.295 [2024-10-30 14:15:44.517743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.295 [2024-10-30 14:15:44.517897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.518046] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.518051] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.518056] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.520450] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.295 [2024-10-30 14:15:44.529896] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.295 [2024-10-30 14:15:44.530308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.295 [2024-10-30 14:15:44.530338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.295 [2024-10-30 14:15:44.530347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.295 [2024-10-30 14:15:44.530512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.530664] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.530670] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.530675] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.533081] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.295 [2024-10-30 14:15:44.542534] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.295 [2024-10-30 14:15:44.543129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.295 [2024-10-30 14:15:44.543159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.295 [2024-10-30 14:15:44.543168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.295 [2024-10-30 14:15:44.543333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.543485] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.543491] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.543497] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.545899] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.295 [2024-10-30 14:15:44.555215] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.295 [2024-10-30 14:15:44.555662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.295 [2024-10-30 14:15:44.555692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.295 [2024-10-30 14:15:44.555701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.295 [2024-10-30 14:15:44.555875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.295 [2024-10-30 14:15:44.556032] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.295 [2024-10-30 14:15:44.556039] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.295 [2024-10-30 14:15:44.556044] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.295 [2024-10-30 14:15:44.558442] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.296 [2024-10-30 14:15:44.567815] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.296 [2024-10-30 14:15:44.568185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.296 [2024-10-30 14:15:44.568200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.296 [2024-10-30 14:15:44.568206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.296 [2024-10-30 14:15:44.568354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.296 [2024-10-30 14:15:44.568504] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.296 [2024-10-30 14:15:44.568510] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.296 [2024-10-30 14:15:44.568516] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.296 [2024-10-30 14:15:44.570914] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.296 [2024-10-30 14:15:44.580503] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.296 [2024-10-30 14:15:44.581052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.296 [2024-10-30 14:15:44.581082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.296 [2024-10-30 14:15:44.581091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.296 [2024-10-30 14:15:44.581255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.296 [2024-10-30 14:15:44.581407] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.296 [2024-10-30 14:15:44.581413] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.296 [2024-10-30 14:15:44.581418] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.296 [2024-10-30 14:15:44.583825] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.557 [2024-10-30 14:15:44.593139] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.557 [2024-10-30 14:15:44.593603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-10-30 14:15:44.593618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.557 [2024-10-30 14:15:44.593623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.557 [2024-10-30 14:15:44.593777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.557 [2024-10-30 14:15:44.593927] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.557 [2024-10-30 14:15:44.593932] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.557 [2024-10-30 14:15:44.593938] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.557 [2024-10-30 14:15:44.596332] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.557 [2024-10-30 14:15:44.605778] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.557 [2024-10-30 14:15:44.606328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-10-30 14:15:44.606358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.557 [2024-10-30 14:15:44.606367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.557 [2024-10-30 14:15:44.606532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.557 [2024-10-30 14:15:44.606683] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.557 [2024-10-30 14:15:44.606689] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.557 [2024-10-30 14:15:44.606695] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.557 [2024-10-30 14:15:44.609096] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
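The blocks above and below all repeat the same sequence: bdev_nvme disconnects the controller, the reconnect to 10.0.0.2:4420 fails in posix_sock_create with errno 111 (ECONNREFUSED, since nothing is listening on that port until the listener is added later in this run), and the reset is retried. A quick shell check that shows the same condition (a sketch; assumes a netcat build that supports -z and that the port is still closed at that point):
# connect() to a closed port fails immediately with ECONNREFUSED (errno 111),
# which is exactly what the posix_sock_create entries above report
nc -z -w 1 10.0.0.2 4420 || echo "10.0.0.2:4420 refused the connection (errno 111)"
Once nvmf_subsystem_add_listener runs further down, the pending resets complete ("Resetting controller successful").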
00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.557 [2024-10-30 14:15:44.618406] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.557 [2024-10-30 14:15:44.618757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-10-30 14:15:44.618773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.557 [2024-10-30 14:15:44.618779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.557 [2024-10-30 14:15:44.618928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.557 [2024-10-30 14:15:44.619077] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.557 [2024-10-30 14:15:44.619085] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.557 [2024-10-30 14:15:44.619090] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.557 [2024-10-30 14:15:44.621486] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.557 [2024-10-30 14:15:44.631085] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.557 [2024-10-30 14:15:44.631467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-10-30 14:15:44.631497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.557 [2024-10-30 14:15:44.631506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.557 [2024-10-30 14:15:44.631670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.557 [2024-10-30 14:15:44.631829] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.557 [2024-10-30 14:15:44.631836] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.557 [2024-10-30 14:15:44.631846] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.557 [2024-10-30 14:15:44.634245] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.557 [2024-10-30 14:15:44.643700] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.557 [2024-10-30 14:15:44.644236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-10-30 14:15:44.644252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.557 [2024-10-30 14:15:44.644257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.557 [2024-10-30 14:15:44.644406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.557 [2024-10-30 14:15:44.644555] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.557 [2024-10-30 14:15:44.644561] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.557 [2024-10-30 14:15:44.644566] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.557 [2024-10-30 14:15:44.646965] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.557 [2024-10-30 14:15:44.656273] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.557 [2024-10-30 14:15:44.656688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-10-30 14:15:44.656718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.557 [2024-10-30 14:15:44.656727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.557 [2024-10-30 14:15:44.656899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.557 [2024-10-30 14:15:44.657051] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.557 [2024-10-30 14:15:44.657057] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.557 [2024-10-30 14:15:44.657062] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.557 [2024-10-30 14:15:44.659291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.557 [2024-10-30 14:15:44.659460] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.557 [2024-10-30 14:15:44.668925] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.557 [2024-10-30 14:15:44.669506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-10-30 14:15:44.669540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.557 [2024-10-30 14:15:44.669549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.557 [2024-10-30 14:15:44.669713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.557 [2024-10-30 14:15:44.669870] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.557 [2024-10-30 14:15:44.669877] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.557 [2024-10-30 14:15:44.669883] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.557 [2024-10-30 14:15:44.672280] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.557 [2024-10-30 14:15:44.681592] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.557 [2024-10-30 14:15:44.682145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-10-30 14:15:44.682175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.557 [2024-10-30 14:15:44.682184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.557 [2024-10-30 14:15:44.682349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.557 [2024-10-30 14:15:44.682500] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.557 [2024-10-30 14:15:44.682506] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.557 [2024-10-30 14:15:44.682511] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.557 [2024-10-30 14:15:44.684916] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.557 Malloc0 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.557 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.558 [2024-10-30 14:15:44.694226] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.558 [2024-10-30 14:15:44.694876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-10-30 14:15:44.694906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.558 [2024-10-30 14:15:44.694915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.558 [2024-10-30 14:15:44.695080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.558 [2024-10-30 14:15:44.695231] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.558 [2024-10-30 14:15:44.695237] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.558 [2024-10-30 14:15:44.695243] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.558 [2024-10-30 14:15:44.697647] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.558 [2024-10-30 14:15:44.706818] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.558 [2024-10-30 14:15:44.707406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-10-30 14:15:44.707436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.558 [2024-10-30 14:15:44.707445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.558 [2024-10-30 14:15:44.707610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.558 [2024-10-30 14:15:44.707767] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.558 [2024-10-30 14:15:44.707774] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.558 [2024-10-30 14:15:44.707779] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:46.558 [2024-10-30 14:15:44.710171] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.558 [2024-10-30 14:15:44.719491] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.558 [2024-10-30 14:15:44.719893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-10-30 14:15:44.719922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d7200 with addr=10.0.0.2, port=4420 00:28:46.558 [2024-10-30 14:15:44.719930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7200 is same with the state(6) to be set 00:28:46.558 [2024-10-30 14:15:44.720096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d7200 (9): Bad file descriptor 00:28:46.558 [2024-10-30 14:15:44.720247] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.558 [2024-10-30 14:15:44.720253] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.558 [2024-10-30 14:15:44.720258] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.558 [2024-10-30 14:15:44.721784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.558 [2024-10-30 14:15:44.722661] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.558 14:15:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1203283 00:28:46.558 [2024-10-30 14:15:44.732146] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.558 [2024-10-30 14:15:44.765407] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
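The rpc_cmd calls traced above are the usual five-step NVMe-oF/TCP target bring-up from host/bdevperf.sh. Written out as direct scripts/rpc.py invocations against the target's default RPC socket (a sketch; the test actually issues them through the rpc_cmd wrapper, and the flag values shown are only the ones visible in the trace):
rpc.py nvmf_create_transport -t tcp -o -u 8192                   # create the TCP transport with the options the script passes
rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc bdev with 512-byte blocks to back the namespace
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # subsystem, -a allows any host, -s sets the serial
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose Malloc0 as a namespace of the subsystem
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # start listening on 10.0.0.2:4420
The last call is what produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above and finally lets the queued controller resets succeed.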
00:28:48.069 4939.43 IOPS, 19.29 MiB/s [2024-10-30T13:15:47.310Z] 5939.62 IOPS, 23.20 MiB/s [2024-10-30T13:15:48.696Z] 6700.11 IOPS, 26.17 MiB/s [2024-10-30T13:15:49.269Z] 7317.40 IOPS, 28.58 MiB/s [2024-10-30T13:15:50.652Z] 7827.82 IOPS, 30.58 MiB/s [2024-10-30T13:15:51.595Z] 8247.42 IOPS, 32.22 MiB/s [2024-10-30T13:15:52.538Z] 8603.62 IOPS, 33.61 MiB/s [2024-10-30T13:15:53.481Z] 8902.86 IOPS, 34.78 MiB/s 00:28:55.182 Latency(us) 00:28:55.182 [2024-10-30T13:15:53.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.182 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:55.182 Verification LBA range: start 0x0 length 0x4000 00:28:55.182 Nvme1n1 : 15.00 9188.07 35.89 13530.22 0.00 5615.66 720.21 13926.40 00:28:55.182 [2024-10-30T13:15:53.481Z] =================================================================================================================== 00:28:55.182 [2024-10-30T13:15:53.481Z] Total : 9188.07 35.89 13530.22 0.00 5615.66 720.21 13926.40 00:28:55.182 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:55.182 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.182 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.182 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.182 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.182 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.183 rmmod nvme_tcp 00:28:55.183 rmmod nvme_fabrics 00:28:55.183 rmmod nvme_keyring 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1204409 ']' 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1204409 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1204409 ']' 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1204409 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.183 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1204409 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf 
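For the summary table bdevperf prints above (columns: runtime(s), IOPS, MiB/s, Fail/s, TO/s, then Average/min/max latency in us), the throughput columns are consistent with the 4096-byte I/O size: MiB/s = IOPS x 4096 / 1048576. A one-line sanity check with POSIX awk:
# 9188.07 IOPS at 4 KiB per I/O ~= 35.89 MiB/s, matching the Nvme1n1 and Total rows
awk 'BEGIN { printf "%.2f MiB/s\n", 9188.07 * 4096 / 1048576 }'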
-- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1204409' 00:28:55.445 killing process with pid 1204409 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1204409 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1204409 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.445 14:15:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:57.996 00:28:57.996 real 0m28.311s 00:28:57.996 user 1m3.505s 00:28:57.996 sys 0m7.644s 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.996 ************************************ 00:28:57.996 END TEST nvmf_bdevperf 00:28:57.996 ************************************ 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.996 ************************************ 00:28:57.996 START TEST nvmf_target_disconnect 00:28:57.996 ************************************ 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:57.996 * Looking for test storage... 
00:28:57.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:57.996 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:57.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.997 --rc genhtml_branch_coverage=1 00:28:57.997 --rc genhtml_function_coverage=1 00:28:57.997 --rc genhtml_legend=1 00:28:57.997 --rc geninfo_all_blocks=1 00:28:57.997 --rc geninfo_unexecuted_blocks=1 00:28:57.997 00:28:57.997 ' 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:57.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.997 --rc genhtml_branch_coverage=1 00:28:57.997 --rc genhtml_function_coverage=1 00:28:57.997 --rc genhtml_legend=1 00:28:57.997 --rc geninfo_all_blocks=1 00:28:57.997 --rc geninfo_unexecuted_blocks=1 00:28:57.997 00:28:57.997 ' 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:57.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.997 --rc genhtml_branch_coverage=1 00:28:57.997 --rc genhtml_function_coverage=1 00:28:57.997 --rc genhtml_legend=1 00:28:57.997 --rc geninfo_all_blocks=1 00:28:57.997 --rc geninfo_unexecuted_blocks=1 00:28:57.997 00:28:57.997 ' 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:57.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.997 --rc genhtml_branch_coverage=1 00:28:57.997 --rc genhtml_function_coverage=1 00:28:57.997 --rc genhtml_legend=1 00:28:57.997 --rc geninfo_all_blocks=1 00:28:57.997 --rc geninfo_unexecuted_blocks=1 00:28:57.997 00:28:57.997 ' 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.997 14:15:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:57.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.997 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.998 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:57.998 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:57.998 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:57.998 14:15:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.140 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.140 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.140 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.140 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.140 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:06.141 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:06.141 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:06.141 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:06.141 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:29:06.141 00:29:06.141 --- 10.0.0.2 ping statistics --- 00:29:06.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.141 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:29:06.141 00:29:06.141 --- 10.0.0.1 ping statistics --- 00:29:06.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.141 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.141 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.141 ************************************ 00:29:06.142 START TEST nvmf_target_disconnect_tc1 00:29:06.142 ************************************ 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:06.142 14:16:03 
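nvmf_tcp_init above splits the two E810 ports (PCI 0x8086:0x159b, enumerated here as cvl_0_0 and cvl_0_1) between a target network namespace and the default namespace, assigns 10.0.0.2/10.0.0.1, opens TCP port 4420, and checks reachability with ping in both directions. Condensed from the traced commands (a sketch; interface names are simply what this host enumerated, and the iptables rule is shown without the SPDK_NVMF comment the ipts wrapper adds):
ip netns add cvl_0_0_ns_spdk                                          # namespace the nvmf target will run in
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move one port into the target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP traffic on the initiator-side port
ping -c 1 10.0.0.2                                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
Because NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD shown above, the target runs inside cvl_0_0_ns_spdk, so connections from the default namespace to 10.0.0.2:4420 exercise the physical E810 link rather than loopback.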
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.142 [2024-10-30 14:16:03.676707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.142 [2024-10-30 14:16:03.676784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6bc00 with addr=10.0.0.2, port=4420 00:29:06.142 [2024-10-30 14:16:03.676816] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:06.142 [2024-10-30 14:16:03.676833] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:06.142 [2024-10-30 14:16:03.676841] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:06.142 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:06.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:06.142 Initializing NVMe Controllers 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:06.142 00:29:06.142 real 0m0.142s 00:29:06.142 user 0m0.056s 00:29:06.142 sys 0m0.084s 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 ************************************ 00:29:06.142 END TEST nvmf_target_disconnect_tc1 00:29:06.142 ************************************ 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
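tc1 above is an expect-failure check: nothing is listening on 10.0.0.2:4420 yet, so the reconnect example must fail to probe the target, which is what the NOT/valid_exec_arg wrappers and the es=1 bookkeeping verify. Stripped of that plumbing, the check amounts to roughly the following (path and flags copied from the trace):

  # tc1 passes only if the connect attempt is refused (no target is up yet).
  reconnect=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
  if "$reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
      echo "unexpected: probe succeeded with no target listening" >&2
      exit 1
  fi
  echo "probe failed as expected (connect() refused, errno 111 in the trace above)"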
00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 ************************************ 00:29:06.142 START TEST nvmf_target_disconnect_tc2 00:29:06.142 ************************************ 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1210547 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1210547 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1210547 ']' 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.142 14:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.142 [2024-10-30 14:16:03.846580] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:29:06.142 [2024-10-30 14:16:03.846642] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.142 [2024-10-30 14:16:03.947157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.142 [2024-10-30 14:16:03.999578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.142 [2024-10-30 14:16:03.999633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:06.142 [2024-10-30 14:16:03.999642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.142 [2024-10-30 14:16:03.999649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.142 [2024-10-30 14:16:03.999658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.142 [2024-10-30 14:16:04.002024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:06.142 [2024-10-30 14:16:04.002182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:06.142 [2024-10-30 14:16:04.002348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:06.142 [2024-10-30 14:16:04.002368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:06.404 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.404 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:06.404 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.404 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.404 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.665 Malloc0 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.665 [2024-10-30 14:16:04.750289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.665 14:16:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.665 [2024-10-30 14:16:04.790656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1210698 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:06.665 14:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:08.586 14:16:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1210547 00:29:08.586 14:16:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with 
error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Write completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 Read completed with error (sct=0, sc=8) 00:29:08.586 starting I/O failed 00:29:08.586 [2024-10-30 14:16:06.829625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.586 [2024-10-30 14:16:06.830120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.830190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.830466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.830480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.830996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.831057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 
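Before this error storm starts, tc2 has already done its setup in the trace above: nvmf_tgt is launched inside the namespace, configured over JSON-RPC (rpc_cmd is essentially scripts/rpc.py talking to /var/tmp/spdk.sock), the reconnect example is started in the background, and the target is then killed with SIGKILL to force the disconnect path. Condensed, and with paths shortened to an SPDK checkout root, that flow is approximately:

  # Condensed tc2 flow; NQN, serial, addresses and flags as seen in this run.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  tgt_pid=$!
  sleep 1   # stand-in for the test's waitforlisten on /var/tmp/spdk.sock

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Drive I/O with automatic reconnects, then kill the target under it.
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnect_pid=$!
  sleep 2
  kill -9 "$tgt_pid"   # outstanding I/O now errors out and the initiator keeps
                       # retrying the dead listener, producing the errno=111 storm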
00:29:08.586 [2024-10-30 14:16:06.831474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.831490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.831967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.832030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.832207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.832221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.832413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.832423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.832725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.832736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.833019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.833032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.833282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.833292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.833653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.833665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.833996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.834007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.834218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.834228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 
00:29:08.586 [2024-10-30 14:16:06.834538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.834549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.834793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.834805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.835156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.835168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.835524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.835536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.835726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.835738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.836157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.836169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.836524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.836541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.836732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.836742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.837113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.837125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.586 qpair failed and we were unable to recover it. 00:29:08.586 [2024-10-30 14:16:06.837432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.586 [2024-10-30 14:16:06.837443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 
00:29:08.587 [2024-10-30 14:16:06.837773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.837786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.838155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.838167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.838527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.838539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.838773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.838785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.839006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.839018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.839210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.839222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.839443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.839454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.839758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.839770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.840018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.840030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.840222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.840233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 
00:29:08.587 [2024-10-30 14:16:06.840536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.840548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.840886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.840898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.841258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.841269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.841611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.841622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.841931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.841942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.842293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.842305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.842664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.842675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.842903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.842915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.843260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.843272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.843621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.843632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 
00:29:08.587 [2024-10-30 14:16:06.844002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.844014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.844369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.844380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.844682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.844694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.845021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.845031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.845424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.845434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.845774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.845784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.846020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.846031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.846232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.846243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.846601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.846611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.846941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.846952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 
00:29:08.587 [2024-10-30 14:16:06.847259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.847269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.847566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.847577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.847876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.847887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.848201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.848211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.848554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.848565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.848866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.848877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.849201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.849212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.849603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.849617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.849942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.849953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 00:29:08.587 [2024-10-30 14:16:06.850270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.587 [2024-10-30 14:16:06.850281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.587 qpair failed and we were unable to recover it. 
00:29:08.587 [2024-10-30 14:16:06.850586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.850597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.850909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.850920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.851219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.851230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.851526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.851537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.851921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.851933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.852234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.852245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.852562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.852573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.852803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.852815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.853138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.853150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.853453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.853463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 
00:29:08.588 [2024-10-30 14:16:06.853781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.853793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.854093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.854104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.854490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.854502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.854814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.854825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.855188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.855199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.855551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.855562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.855895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.855908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.856232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.856244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.856558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.856569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.856888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.856900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 
00:29:08.588 [2024-10-30 14:16:06.857199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.857210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.857518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.857529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.857855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.857867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.858171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.858182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.858500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.858515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.858871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.858884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.859188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.859199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.859499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.859511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.859919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.859932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.860248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.860259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 
00:29:08.588 [2024-10-30 14:16:06.860571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.860583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.860902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.860913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.861239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.861251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.861568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.861579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.861880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.861896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.862213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.862229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.862547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.862564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.862982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.863000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.588 qpair failed and we were unable to recover it. 00:29:08.588 [2024-10-30 14:16:06.863303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.588 [2024-10-30 14:16:06.863320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.863660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.863676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 
00:29:08.589 [2024-10-30 14:16:06.864053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.864068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.864368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.864383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.864714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.864729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.865098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.865114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.865425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.865441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.865762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.865779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.866109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.866124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.866446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.866461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.866782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.866797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.867166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.867182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 
00:29:08.589 [2024-10-30 14:16:06.867497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.867512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.867809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.867825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.868198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.868213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.868424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.868439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.868770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.868787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.869116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.869131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.869458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.869472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.869733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.869760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.869960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.869974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.870253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.870268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 
00:29:08.589 [2024-10-30 14:16:06.870618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.870633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.870975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.870990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.871304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.871319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.871633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.871649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.871986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.872001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.872341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.872368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.872665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.872681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.873006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.873023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.873348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.873362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.873709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.873729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 
00:29:08.589 [2024-10-30 14:16:06.874101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.874121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.874448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.874469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.874695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.874716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.875085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.875105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.875480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.875501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.589 [2024-10-30 14:16:06.875865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.589 [2024-10-30 14:16:06.875886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.589 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.876216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.876235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.876561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.876580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.876931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.876953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.877274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.877293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 
00:29:08.590 [2024-10-30 14:16:06.877620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.877640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.877973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.877994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.878214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.878233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.878570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.878589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.878914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.878935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.879274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.879294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.879700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.879719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.879884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.879904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.880322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.880341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.880696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.880717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 
00:29:08.590 [2024-10-30 14:16:06.881061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.881082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.881401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.881422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.881756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.881777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.882110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.882132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.590 [2024-10-30 14:16:06.882494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.590 [2024-10-30 14:16:06.882513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.590 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.882904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.882927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.883270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.883291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.883606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.883626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.883937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.883957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.884279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.884299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 
00:29:08.863 [2024-10-30 14:16:06.884648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.884668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.885021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.885048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.885406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.885433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.885684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.885713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.886147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.886175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.886512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.886539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.886913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.886941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.887294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.887320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.887768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.887797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.888160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.888187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 
00:29:08.863 [2024-10-30 14:16:06.888558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.888585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.888932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.888959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.889327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.889352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.889812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.889839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.890180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.890207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.890582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.890608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.890977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.891004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.891348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.891375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.891741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.891778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.892211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.892237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 
00:29:08.863 [2024-10-30 14:16:06.892576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.892604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.892974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.893004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.893365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.893392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.893765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.893794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.894154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.894180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.894555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.894582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.894836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.894863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.895281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.895310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.863 [2024-10-30 14:16:06.895532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.863 [2024-10-30 14:16:06.895561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.863 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.895941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.895971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 
00:29:08.864 [2024-10-30 14:16:06.896337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.896365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.896726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.896780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.897196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.897227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.897466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.897504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.897881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.897911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.898273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.898302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.898669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.898699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.899139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.899171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.899589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.899619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.899978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.900009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 
00:29:08.864 [2024-10-30 14:16:06.900371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.900400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.900654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.900688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.901111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.901141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.901512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.901540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.901938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.901975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.902302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.902332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.902696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.902726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.903102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.903133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.903368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.903400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.903767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.903798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 
00:29:08.864 [2024-10-30 14:16:06.904140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.904169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.904524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.904552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.904922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.904953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.905296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.905326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.905734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.905774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.906140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.906171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.906531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.906560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.906921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.906951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.907312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.907342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.907680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.907711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 
00:29:08.864 [2024-10-30 14:16:06.908131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.908161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.908519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.908550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.908893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.908925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.909233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.909260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.909545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.909576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.909915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.909944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.910275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.910305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.910673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.910701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.911069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.911099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 00:29:08.864 [2024-10-30 14:16:06.911462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.864 [2024-10-30 14:16:06.911495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.864 qpair failed and we were unable to recover it. 
00:29:08.865 [2024-10-30 14:16:06.911865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.911895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.912258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.912287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.912645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.912673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.913042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.913074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.913412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.913448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.913802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.913833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.914226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.914255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.914615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.914643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.915010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.915039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.915410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.915441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 
00:29:08.865 [2024-10-30 14:16:06.915806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.915836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.916228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.916257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.916508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.916539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.916900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.916931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.917269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.917300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.917660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.917690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.918043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.918075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.918427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.918457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.918767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.918797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.919150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.919183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 
00:29:08.865 [2024-10-30 14:16:06.919537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.919568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.919929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.919959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.920336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.920365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.920728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.920783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.921134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.921163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.921521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.921549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.921793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.921826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.922194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.922223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.922585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.922614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.922989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.923019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 
00:29:08.865 [2024-10-30 14:16:06.923246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.923275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.923646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.923682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.924012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.924041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.924396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.924426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.924790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.924823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.925209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.925237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.925617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.925647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.926028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.926060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.926403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.926432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 00:29:08.865 [2024-10-30 14:16:06.926837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.865 [2024-10-30 14:16:06.926868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.865 qpair failed and we were unable to recover it. 
00:29:08.865 [2024-10-30 14:16:06.927241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.865 [2024-10-30 14:16:06.927269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:08.865 qpair failed and we were unable to recover it.
00:29:08.865-00:29:08.871 [2024-10-30 14:16:06.927 - 14:16:07.007] the same three-line error sequence recurs for every subsequent connection attempt in this interval: posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420, and each qpair failed and could not be recovered.
00:29:08.871 [2024-10-30 14:16:07.007769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.007799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.008147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.008177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.008545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.008574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.008868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.008897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.009250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.009279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.009646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.009675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.010058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.010087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.010453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.010485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.010838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.010870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.011230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.011268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 
00:29:08.871 [2024-10-30 14:16:07.011601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.011631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.011989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.012020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.012381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.012410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.012780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.012812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.013254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.013283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.013644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.013679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.014071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.014101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.014450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.014480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.014826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.014856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.015230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.015259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 
00:29:08.871 [2024-10-30 14:16:07.015493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.015521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.015682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.015710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.016116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.016145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.016517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.016548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.016944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.016974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.017331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.017360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-10-30 14:16:07.017646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-10-30 14:16:07.017682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.018057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.018087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.018452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.018482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.018854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.018886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-10-30 14:16:07.019241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.019273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.019620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.019650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.020020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.020051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.020409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.020438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.020805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.020835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.021198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.021227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.021604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.021633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.021986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.022015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.022350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.022378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.022768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.022799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-10-30 14:16:07.023089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.023118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.023491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.023520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.023882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.023913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.024276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.024305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.024552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.024583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.024932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.024961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.025296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.025326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.025691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.025721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.026070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.026100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.026465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.026495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-10-30 14:16:07.026863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.026894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.027277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.027308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.027640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.027670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.028000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.028068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.028454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.028486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.028844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.028874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.029310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.029339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.029682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.029713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.030150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.030181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.030514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.030547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-10-30 14:16:07.030868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.030899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.031264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.031294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.031638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.031667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.031998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.032029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.032377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.032405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.032777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.032808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.033184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-10-30 14:16:07.033213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-10-30 14:16:07.033553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.033583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.033933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.033963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.034302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.034332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-10-30 14:16:07.034704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.034734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.035118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.035149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.035497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.035525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.035889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.035920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.036340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.036369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.036697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.036726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.037067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.037097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.037451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.037480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.037870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.037899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.038263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.038292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-10-30 14:16:07.038548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.038578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.038933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.038964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.039331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.039360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.039718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.039761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.040123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.040153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.040508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.040537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.040895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.040925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.041298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.041327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.041690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.041720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.042092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.042122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-10-30 14:16:07.042470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.042500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.042877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.042906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.043289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.043318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.043680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.043709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.044088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.044132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.044466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.044496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.044875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.044905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.045265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.045295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.045663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.045691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.045995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.046024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-10-30 14:16:07.046365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.046394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.046767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.046798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.047150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.047180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.047518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.047548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.047917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.047948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.048312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.048340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.048725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.048766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.049114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.049143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-10-30 14:16:07.049512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-10-30 14:16:07.049541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.049778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.049808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 
00:29:08.874 [2024-10-30 14:16:07.050207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.050236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.050593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.050623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.050923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.050954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.051320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.051349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.051706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.051735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.052095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.052124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.052489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.052521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.052809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.052840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.053199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.053231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.053604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.053633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 
00:29:08.874 [2024-10-30 14:16:07.054007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.054037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.054372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.054400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.054770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.054800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.055142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.055171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.055545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.055573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.055949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.055979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.056341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.056370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.056709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.056738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.057108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.057137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-10-30 14:16:07.057501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-10-30 14:16:07.057529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 
00:29:08.874 [2024-10-30 14:16:07.057765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.874 [2024-10-30 14:16:07.057796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:08.874 qpair failed and we were unable to recover it.
00:29:08.874 [... the same three-message sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for roughly 200 further connection attempts, with timestamps running from 14:16:07.058 through 14:16:07.134 ...]
00:29:08.879 [2024-10-30 14:16:07.134806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.879 [2024-10-30 14:16:07.134846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:08.879 qpair failed and we were unable to recover it.
00:29:08.879 [2024-10-30 14:16:07.135215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.135244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 00:29:08.879 [2024-10-30 14:16:07.135641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.135669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 00:29:08.879 [2024-10-30 14:16:07.136098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.136128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 00:29:08.879 [2024-10-30 14:16:07.136473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.136502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 00:29:08.879 [2024-10-30 14:16:07.136874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.136905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 00:29:08.879 [2024-10-30 14:16:07.137276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.137305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 00:29:08.879 [2024-10-30 14:16:07.137671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.137701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 00:29:08.879 [2024-10-30 14:16:07.138090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.138122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 00:29:08.879 [2024-10-30 14:16:07.138468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.138498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 00:29:08.879 [2024-10-30 14:16:07.138841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.879 [2024-10-30 14:16:07.138874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.879 qpair failed and we were unable to recover it. 
00:29:08.879 [2024-10-30 14:16:07.139238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.139267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.139515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.139544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.139902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.139933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.140304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.140333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.140692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.140721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.140997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.141030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.141441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.141470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.141837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.141869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.142221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.142250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.142659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.142688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 
00:29:08.880 [2024-10-30 14:16:07.143059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.143089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.143445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.143474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.143847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.143879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.144094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.144125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.144497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.144526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.144888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.144918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.145298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.145327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.145692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.145721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.145988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.146022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.146277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.146307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 
00:29:08.880 [2024-10-30 14:16:07.146700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.146729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.147001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.147033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.147421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.147450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.147829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.147862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.148234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.148264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.148620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.148651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.149021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.149052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.149383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.149413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.149838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.149868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.150223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.150254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 
00:29:08.880 [2024-10-30 14:16:07.150632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.150661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.151061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.151091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.151367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.151395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.151734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.880 [2024-10-30 14:16:07.151779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:08.880 qpair failed and we were unable to recover it. 00:29:08.880 [2024-10-30 14:16:07.152143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-10-30 14:16:07.152173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.152498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.152531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.152898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.152931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.153200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.153229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.153581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.153610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.153862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.153894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-10-30 14:16:07.154305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.154334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.154696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.154724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.155133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.155163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.155534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.155563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.155945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.155977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.156326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.156354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.156718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.156761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.157115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.157143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.157482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.157510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.157875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.157908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-10-30 14:16:07.158182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.158212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.158576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.158605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.158982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.159013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.159387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.159416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.159714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.159743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.160109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.160138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.160510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.160539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.160906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.160942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.161300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.161329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.161649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.161679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-10-30 14:16:07.162037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.162069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.162376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.162406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.162772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.162803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.163065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.163094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.163431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.163461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.163761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.163791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.164128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.164158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.164522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.164552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.164971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.165002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.165363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.165393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-10-30 14:16:07.165770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.165801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.166211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.166241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.166586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.166616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.166992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.167023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-10-30 14:16:07.167359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-10-30 14:16:07.167389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.167765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.167796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.168066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.168094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.168470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.168499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.168863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.168894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.169257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.169287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-10-30 14:16:07.169661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.169690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.170052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.170081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.170450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.170479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.170841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.170873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.171231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.171260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.171624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.171654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.172080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.172110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.172464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.172492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.172838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.172868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.173219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.173250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-10-30 14:16:07.173618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.173650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.173903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.173933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.174317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.174346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.174704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.174733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.175162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.175194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.175459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.175489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.175825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.175857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.176163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.176193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.176528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.176564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.176837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.176866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-10-30 14:16:07.177225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.177254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.177606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.177637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.178007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.178037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.178374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.178406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.178774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.178805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.179065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.179094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.179469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.179497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.179856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.179889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.180255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.180284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.180642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.180671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-10-30 14:16:07.181014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.181045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.181274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.181303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.181565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.181595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.181933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.181963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.182306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.182336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-10-30 14:16:07.182668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-10-30 14:16:07.182699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.156 [2024-10-30 14:16:07.183087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-10-30 14:16:07.183118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-10-30 14:16:07.183470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-10-30 14:16:07.183498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-10-30 14:16:07.183863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-10-30 14:16:07.183895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-10-30 14:16:07.184245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-10-30 14:16:07.184274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-10-30 14:16:07.184650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.156 [2024-10-30 14:16:07.184681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:09.156 qpair failed and we were unable to recover it.
00:29:09.161 [... the same three-line failure sequence (posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for each retry between 14:16:07.184 and 14:16:07.267 ...]
00:29:09.161 [2024-10-30 14:16:07.267379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.267411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.267733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.267772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.268196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.268225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.268563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.268592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.268920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.268950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.269197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.269229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.269472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.269502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.269773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.269804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.270189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.270218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.270543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.270573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 
00:29:09.161 [2024-10-30 14:16:07.270861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.270892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.271271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.271300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.271666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.271695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.272089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.272121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.272497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.272525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.272882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.272911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.273291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.273320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.273677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.273706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.274104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.274135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.274495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.274524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 
00:29:09.161 [2024-10-30 14:16:07.274888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.274920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.275300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.275329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.275638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.275668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.276034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.276064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-10-30 14:16:07.276427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-10-30 14:16:07.276457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.276817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.276847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.277219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.277248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.277587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.277617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.277962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.277992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.278364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.278394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-10-30 14:16:07.278762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.278795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.279169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.279198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.279520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.279550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.279895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.279926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.280264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.280294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.280655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.280686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.281020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.281052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.281406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.281435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.281792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.281822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.282147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.282175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-10-30 14:16:07.282577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.282611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.282982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.283013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.283381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.283411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.283672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.283701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.284066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.284097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.284461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.284490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.284905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.284937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.285286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.285315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.285570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.285599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.285928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.285958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-10-30 14:16:07.286313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.286343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.286707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.286737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.287079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.287109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.287477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.287508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.287870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.287902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.288272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.288301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.288645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.288674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.289038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.289069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.289428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.289457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-10-30 14:16:07.289820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.289850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-10-30 14:16:07.290221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-10-30 14:16:07.290250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.290509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.290538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.290930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.290960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.291332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.291361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.291730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.291769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.292005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.292034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.292392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.292423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.292769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.292800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.293192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.293221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.293579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.293608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-10-30 14:16:07.293978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.294009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.294364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.294395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.294638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.294668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.295035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.295065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.295427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.295457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.295818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.295847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.296200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.296230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.296590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.296621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.296977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.297008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.297262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.297294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-10-30 14:16:07.297665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.297694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.298032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.298070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.298423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.298454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.298802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.298834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.299229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.299258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.299615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.299644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.299977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.300008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.300359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.300388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.300740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.300780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.301138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.301168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-10-30 14:16:07.301511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.301540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.301893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.301926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.302288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.302317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.302681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.302711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.303194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.303224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.303583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.303614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.303991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.304022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.304402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.304431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.304790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.304821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.305205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.305235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-10-30 14:16:07.305580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.305611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-10-30 14:16:07.305862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-10-30 14:16:07.305895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.306261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.306291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.306534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.306562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.306952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.306983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.307342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.307371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.307737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.307781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.308151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.308182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.308511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.308546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.308903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.308936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-10-30 14:16:07.309295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.309324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.309682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.309711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.310145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.310176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.310527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.310556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.310925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.310956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.311317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.311345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.311608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.311636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.312039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.312069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.312443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.312474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.312831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.312862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-10-30 14:16:07.313224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.313253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.313589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.313619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.313956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.313986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.314227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.314256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.314609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.314640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.314995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.315025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.315375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.315404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.315769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.315799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.316128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.316157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-10-30 14:16:07.316520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-10-30 14:16:07.316550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-10-30 14:16:07.316918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.164 [2024-10-30 14:16:07.316949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:09.164 qpair failed and we were unable to recover it.
[... the same three-message failure cycle (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt logged between 14:16:07.317 and 14:16:07.396 ...]
00:29:09.169 [2024-10-30 14:16:07.396343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.169 [2024-10-30 14:16:07.396373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:09.169 qpair failed and we were unable to recover it.
00:29:09.169 [2024-10-30 14:16:07.396741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-10-30 14:16:07.396787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-10-30 14:16:07.397171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.397202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.397558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.397588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.397930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.397960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.398324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.398353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.398721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.398772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.399129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.399157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.399503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.399533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.399880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.399912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.400251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.400279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-10-30 14:16:07.400691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.400719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.401062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.401092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.401457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.401488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.401850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.401889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.402254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.402283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.402647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.402675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.403032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.403062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.403401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.403429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.403786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.403818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.404196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.404225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-10-30 14:16:07.404580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.404609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.404926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.404956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.405320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.405348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.405707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.405737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.406114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.406143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.406500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.406531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.406861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.406892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.407248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.407276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.407649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.407677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.408049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.408078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-10-30 14:16:07.408440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.408469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.408860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.408890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.409189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.409217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.409560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.409588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.409976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.410005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.410367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.410396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.410829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.410862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.411201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.411230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.411607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.411635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.411980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.412010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-10-30 14:16:07.412372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.412401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.412775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.412808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-10-30 14:16:07.413169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-10-30 14:16:07.413198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-10-30 14:16:07.413565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-10-30 14:16:07.413593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-10-30 14:16:07.413861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-10-30 14:16:07.413890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-10-30 14:16:07.414271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-10-30 14:16:07.414300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-10-30 14:16:07.414659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-10-30 14:16:07.414688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-10-30 14:16:07.415072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-10-30 14:16:07.415102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-10-30 14:16:07.415460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-10-30 14:16:07.415491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-10-30 14:16:07.415759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-10-30 14:16:07.415790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 
00:29:09.171 [2024-10-30 14:16:07.416153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-10-30 14:16:07.416182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.174 qpair failed and we were unable to recover it. 00:29:09.174 [2024-10-30 14:16:07.416546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.174 [2024-10-30 14:16:07.416574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.174 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.416947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.416977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.417324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.417353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.417691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.417722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.418095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.418125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.418484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.418513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.418876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.418905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.419263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.419292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.419662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.419692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-10-30 14:16:07.420146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.420177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.420593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.420622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.420999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.421029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.421391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.421420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.421778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.421808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.422168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.422198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.422557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.422586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.422968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.422997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.423365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.423395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.423768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.423799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-10-30 14:16:07.424160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.424189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.424439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.424473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.424821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.424852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.425227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.425256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.425616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.425644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.426004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.426034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.426389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.426417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.426784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.426815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.427197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.427226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.427582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.427610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-10-30 14:16:07.427939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.427968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.428328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.428373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.428765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.428798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.429174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.429203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.429571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.429599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.429971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.430002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.430356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.430384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.430734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.430777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.431179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.431209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.431565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.431594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-10-30 14:16:07.431881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.431911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.432287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.432315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-10-30 14:16:07.432671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-10-30 14:16:07.432699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.433062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.433092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.433449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.433479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.433843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.433874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.434271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.434299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.434646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.434675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.435030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.435060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.435417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.435445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-10-30 14:16:07.435814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.435844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.437727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.437815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.438255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.438291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.440002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.440065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.440503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.440538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.440899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.440931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.441283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.441313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.441683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.441711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-10-30 14:16:07.442148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-10-30 14:16:07.442179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.442536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.442570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 
00:29:09.449 [2024-10-30 14:16:07.442915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.442945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.443384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.443415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.443790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.443820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.444280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.444309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.444661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.444689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.445092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.445123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.445374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.445406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.445810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.445841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.446197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.446227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.446589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.446617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 
00:29:09.449 [2024-10-30 14:16:07.446994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.447023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.447390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.447418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.447785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.447823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.448203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.448232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.448595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.448624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.449062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.449092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.449493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.449522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.449882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.449913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.450274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.450304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 00:29:09.449 [2024-10-30 14:16:07.450665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.449 [2024-10-30 14:16:07.450694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.449 qpair failed and we were unable to recover it. 
00:29:09.449 [2024-10-30 14:16:07.451064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.449 [2024-10-30 14:16:07.451094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:09.449 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 2024-10-30 14:16:07.451 through 14:16:07.531 ...]
00:29:09.455 [2024-10-30 14:16:07.531114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.455 [2024-10-30 14:16:07.531144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:09.455 qpair failed and we were unable to recover it.
00:29:09.455 [2024-10-30 14:16:07.531483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.531512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.531767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.531799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.532186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.532216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.532560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.532590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.532942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.532972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.533334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.533363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.533729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.533770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.534013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.534042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.534396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.534425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.534730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.534771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 
00:29:09.455 [2024-10-30 14:16:07.535207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.535236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.535581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.535611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.535971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.536001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.536360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.536389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.536839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.536870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.537229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.537257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.537622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.537652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.538011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.538041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.538344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.538381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.538770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.538802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 
00:29:09.455 [2024-10-30 14:16:07.539195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.539224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.539593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.539621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.539993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.540023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.540358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.540387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.540667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.540697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.541047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.541076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.541485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.541514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.541884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.541921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.542280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.542309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.542669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.542698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 
00:29:09.455 [2024-10-30 14:16:07.543050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.543081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.543426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.543456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.543825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.543855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.544223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.544252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.455 qpair failed and we were unable to recover it. 00:29:09.455 [2024-10-30 14:16:07.544606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.455 [2024-10-30 14:16:07.544636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.545066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.545096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.545431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.545461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.545810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.545842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.546217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.546246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.546612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.546640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 
00:29:09.456 [2024-10-30 14:16:07.546999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.547028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.547406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.547436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.547803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.547834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.548204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.548233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.548595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.548623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.548971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.549001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.549368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.549396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.549571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.549599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.549925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.549956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.550314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.550342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 
00:29:09.456 [2024-10-30 14:16:07.550709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.550737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.551107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.551137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.551496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.551524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.551780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.551813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.552163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.552199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.552537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.552567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.552899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.552928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.553302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.553332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.553688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.553717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.554095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.554125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 
00:29:09.456 [2024-10-30 14:16:07.554494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.554524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.554814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.554845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.555220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.555250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.555669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.555697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.556071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.556101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.556360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.556389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.556766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.556796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.557085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.557113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.557468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.557498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.557781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.557813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 
00:29:09.456 [2024-10-30 14:16:07.558182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.558212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.558552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.558581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.558952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.558983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.559347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.559375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.559732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.559773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-10-30 14:16:07.560124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-10-30 14:16:07.560152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.560399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.560431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.560789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.560820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.561202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.561231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.561597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.561625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-10-30 14:16:07.561974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.562005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.562408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.562438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.562794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.562825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.563202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.563230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.563646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.563674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.563911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.563944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.564364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.564394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.564771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.564801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.565158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.565187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.565564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.565593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-10-30 14:16:07.565930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.565959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.566338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.566368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.566728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.566771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.567203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.567233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.567585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.567614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.567980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.568017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.568368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.568398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.568770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.568800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.569159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.569188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.569528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.569558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-10-30 14:16:07.569924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.569953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.570207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.570238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.570643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.570673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.571001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.571034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.571352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.571382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.571738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.571782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.572121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.572149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.572424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.572453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.572803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.572833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.573106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.573135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-10-30 14:16:07.573538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.573567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.573924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-10-30 14:16:07.573953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-10-30 14:16:07.574322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.574350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.574723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.574762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.575115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.575144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.575484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.575514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.575880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.575911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.576175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.576203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.576576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.576605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.576966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.576997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 
00:29:09.458 [2024-10-30 14:16:07.577358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.577387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.577764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.577796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.578157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.578192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.578573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.578601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.578974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.579003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.579368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.579397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.579771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.579801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.580134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.580163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.580419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.580447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-10-30 14:16:07.580788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-10-30 14:16:07.580819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 
00:29:09.458 [2024-10-30 14:16:07.581125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.458 [2024-10-30 14:16:07.581154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:09.458 qpair failed and we were unable to recover it.
00:29:09.458 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt timestamped 2024-10-30 14:16:07.581505 through 14:16:07.661523 ...]
00:29:09.463 [2024-10-30 14:16:07.661782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-10-30 14:16:07.661811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-10-30 14:16:07.662202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.662231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.662579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.662611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.662970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.663001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.663359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.663388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.663764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.663795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.664137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.664166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.664526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.664556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.665006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.665040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.665389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.665420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.665804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.665835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.463 [2024-10-30 14:16:07.666099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.666127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.666466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.666495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.666862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.666893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-10-30 14:16:07.667294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-10-30 14:16:07.667325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.667644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.667674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.668040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.668071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.668432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.668460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.668836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.668866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.669218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.669247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.669636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.669664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-10-30 14:16:07.670043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.670075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.670231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.670264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.670629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.670660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.671012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.671042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.671375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.671404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.671769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.671800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.672055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.672084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.672422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.672461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.672843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.672874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.673097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.673129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-10-30 14:16:07.673490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.673519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.673778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.673808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.674083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.674112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.674499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.674528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.674880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.674909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.675281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.675311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.675560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.675588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.675948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.675978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.676339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.676368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.676685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.676714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-10-30 14:16:07.677092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.677124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.677361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.677390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.677745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.677791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.678164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.678193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.678558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.678586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.678934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.678964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.679328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.679356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.679718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.679758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.680139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.680168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.680513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.680543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-10-30 14:16:07.680914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.680943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.681353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.681382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.681740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.681795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.682140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.682170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-10-30 14:16:07.682530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-10-30 14:16:07.682559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.682915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.682945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.683346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.683375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.683731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.683771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.684128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.684158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.684547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.684577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-10-30 14:16:07.684929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.684959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.685313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.685342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.685704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.685732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.686103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.686134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.686494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.686523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.686868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.686901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.687264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.687292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.687656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.687686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.688054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.688085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.688462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.688491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-10-30 14:16:07.688868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.688899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.689309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.689338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.689689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.689719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.690107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.690136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.690405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.690432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.690783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.690815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.691155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.691185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.691556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.691585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.691944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.691974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.692387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.692417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-10-30 14:16:07.692780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.692811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.693168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.693197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.693443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.693474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.693818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.693849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.694223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.694251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.694606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.694634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.694993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.695023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.695357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.695386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.695716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.695757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.696116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.696146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-10-30 14:16:07.696548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.696577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.696936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.696966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.697330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.697360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.697695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.697727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.698101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.698131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-10-30 14:16:07.698495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-10-30 14:16:07.698532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.698787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.698819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.699176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.699205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.699580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.699609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.699991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.700023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-10-30 14:16:07.700376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.700405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.700771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.700801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.701052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.701082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.701422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.701452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.701821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.701851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.702190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.702221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.702468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.702500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.702865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.702896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.703260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.703290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.703654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.703684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-10-30 14:16:07.704044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.704073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.704517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.704546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.704908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.704938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.705298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.705328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.705696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.705726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.706103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.706133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.706490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.706519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.706908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.706939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.707316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.707345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.707709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.707738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-10-30 14:16:07.708081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.708110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.708471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.708501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.708871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.708901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.709267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.709297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.709655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.709684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.710050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.710080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.710449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.710477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.710834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.710864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.711128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.711160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.711541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.711570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-10-30 14:16:07.711926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.711955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.712322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-10-30 14:16:07.712350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-10-30 14:16:07.712708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.712736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.713108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.713137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.713543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.713571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.713926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.713957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.714189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.714224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.714574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.714605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.714969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.714999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.715366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.715395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-10-30 14:16:07.715775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.715804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.716196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.716225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.716630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.716659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.717001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.717032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.717405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.717435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.717831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.717861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.718114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.718145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.718495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.718524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.718887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.718918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.719254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.719283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-10-30 14:16:07.719637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.719666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.720039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.720068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.720433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.720461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.720823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.720854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.721201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.721230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.721459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.721491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.721851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.721881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.722127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.722158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.722528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.722557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.722930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.722960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-10-30 14:16:07.723394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.723424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.723777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.723808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.724062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.724090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.724475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.724510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.724867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.724897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.725286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.725315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.725678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.725707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.726209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.726249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.726541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.726576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.726969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.727001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-10-30 14:16:07.727363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.727392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.727764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.727795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.728189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-10-30 14:16:07.728219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-10-30 14:16:07.728577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.728606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.728972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.729002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.729272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.729300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.729642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.729671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.730012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.730043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.730371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.730399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.730778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.730810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-10-30 14:16:07.731214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.731242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.731577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.731605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.731957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.731988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.732244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.732274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.732498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.732527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.732878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.732908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.733275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.733304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.733671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.733700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.734061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.734092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.734440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.734469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-10-30 14:16:07.734835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.734867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.735237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.735266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.735625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.735654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.736021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.736050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.736388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.736416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.736789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.736819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-10-30 14:16:07.737203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-10-30 14:16:07.737232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.737584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.737615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.737970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.738000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.738242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.738271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 
00:29:09.742 [2024-10-30 14:16:07.738531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.738559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.738804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.738835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.739217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.739246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.739628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.739657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.740047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.740084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.740435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.740463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.740815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.740845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.741212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.741241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.741607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.741637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.742001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.742031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 
00:29:09.742 [2024-10-30 14:16:07.742295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.742322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.742689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.742718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.743092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.743122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.743475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.743504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.743874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.743905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.744277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.744307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.744664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.744693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.742 [2024-10-30 14:16:07.745075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.742 [2024-10-30 14:16:07.745106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.742 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.745404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.745440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.745795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.745825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 
00:29:09.743 [2024-10-30 14:16:07.746188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.746217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.746553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.746581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.746938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.746969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.747306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.747335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.747684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.747712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.748063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.748093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.748451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.748480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.748857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.748888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.749287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.749317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.749565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.749594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 
00:29:09.743 [2024-10-30 14:16:07.749975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.750005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.750369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.750404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.750756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.750788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.751043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.751072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.751456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.751485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.751847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.751877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.752246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.752275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.752510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.752539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.752900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.752930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.753297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.753326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 
00:29:09.743 [2024-10-30 14:16:07.753693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.753721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.754100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.754130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.754490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.754520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.754887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.754917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.755275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.755304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.755662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.755691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.756066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.756095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.756465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.756493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.756765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.756795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.757146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.757176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 
00:29:09.743 [2024-10-30 14:16:07.757554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.757583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.757927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.757959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.758318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.758346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.758711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.758740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.759121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.759150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.759526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.759555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.759925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.759955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.760311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.760341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.760706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.743 [2024-10-30 14:16:07.760734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.743 qpair failed and we were unable to recover it. 00:29:09.743 [2024-10-30 14:16:07.761105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.761135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 
00:29:09.744 [2024-10-30 14:16:07.761502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.761530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.761868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.761899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.762260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.762290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.762657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.762686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.763043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.763073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.763438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.763466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.763812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.763842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.764139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.764169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.764504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.764533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.764897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.764928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 
00:29:09.744 [2024-10-30 14:16:07.765286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.765315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.765680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.765709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.766065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.766101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.766369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.766403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.766784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.766815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.767174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.767203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.767593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.767622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.767980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.768010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.768266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.768297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.768657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.768686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 
00:29:09.744 [2024-10-30 14:16:07.769028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.769058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.769414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.769443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.769674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.769705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.770096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.770126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.770485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.770513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.770871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.770901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.771331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.771361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.771731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.771769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.772007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.772035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.772398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.772426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 
00:29:09.744 [2024-10-30 14:16:07.772782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.772814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.773199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.773228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.773512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.773541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.773771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.773804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.774038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.774069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.774436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.774465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.774816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.774847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.775219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.744 [2024-10-30 14:16:07.775248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.744 qpair failed and we were unable to recover it. 00:29:09.744 [2024-10-30 14:16:07.775608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.745 [2024-10-30 14:16:07.775637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.745 qpair failed and we were unable to recover it. 00:29:09.745 [2024-10-30 14:16:07.776021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.745 [2024-10-30 14:16:07.776057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.745 qpair failed and we were unable to recover it. 
00:29:09.745 [2024-10-30 14:16:07.776438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.745 [2024-10-30 14:16:07.776468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:09.745 qpair failed and we were unable to recover it.
00:29:09.745-00:29:09.750 [2024-10-30 14:16:07.776910 - 14:16:07.855125] the same posix.c:1055:posix_sock_create connect() failure (errno = 111) and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock error for tqpair=0x7be120 (addr=10.0.0.2, port=4420) repeat for every subsequent connection attempt in this interval, each attempt ending with "qpair failed and we were unable to recover it."
00:29:09.750 [2024-10-30 14:16:07.855465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.855494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.855854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.855885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.856250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.856279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.856636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.856664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.857033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.857064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.857365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.857393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.857745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.857787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.858140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.858176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.858552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.858583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.858970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.859000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 
00:29:09.750 [2024-10-30 14:16:07.859357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.859388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.859722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.859763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.860124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.860154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.860507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.860536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.860880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.860911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.861160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.861190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.861568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.861596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.861991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.862022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.862426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.862454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.862818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.862849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 
00:29:09.750 [2024-10-30 14:16:07.863184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.863214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.863558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.863587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.863954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.863985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.864218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.864248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.750 qpair failed and we were unable to recover it. 00:29:09.750 [2024-10-30 14:16:07.864603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.750 [2024-10-30 14:16:07.864632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.865050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.865081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.865415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.865444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.865828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.865858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.866216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.866245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.866569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.866599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 
00:29:09.751 [2024-10-30 14:16:07.866855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.866884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.867125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.867157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.867505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.867542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.867888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.867918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.868292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.868329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.868677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.868707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.869098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.869128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.869558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.869588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.869922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.869953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.870309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.870339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 
00:29:09.751 [2024-10-30 14:16:07.870700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.870729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.871117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.871146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.871508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.871538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.871943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.871973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.872329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.872359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.872612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.872642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.872996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.873028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.873392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.873421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.873804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.873836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.874211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.874240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 
00:29:09.751 [2024-10-30 14:16:07.874598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.874628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.874971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.875001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.875346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.875375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.875735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.875776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.876111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.876140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.876503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.876532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.876925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.876955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.877319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.877348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.877703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.877732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.751 [2024-10-30 14:16:07.878100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.878131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 
00:29:09.751 [2024-10-30 14:16:07.878485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.751 [2024-10-30 14:16:07.878515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.751 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.878869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.878900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.879304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.879334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.879692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.879724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.880038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.880067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.880411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.880441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.880811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.880841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.881222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.881251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.881619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.881648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.882007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.882037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 
00:29:09.752 [2024-10-30 14:16:07.882403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.882433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.882773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.882804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.883156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.883186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.883547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.883576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.883928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.883957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.884306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.884342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.884693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.884722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.885115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.885145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.885507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.885536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.885909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.885939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 
00:29:09.752 [2024-10-30 14:16:07.886302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.886331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.886702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.886730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.887126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.887156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.887533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.887562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.887945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.887976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.888339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.888369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.888732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.888772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.889173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.889202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.889583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.889612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.889976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.890007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 
00:29:09.752 [2024-10-30 14:16:07.890385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.890414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.890772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.890803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.891160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.891188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.891546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.891575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.891943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.891972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.892326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.892354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.892718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.892773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.893147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.893178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.893433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.893461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.893805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.893835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 
00:29:09.752 [2024-10-30 14:16:07.894208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.894236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.752 [2024-10-30 14:16:07.894593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.752 [2024-10-30 14:16:07.894622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.752 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.894983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.895019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.895354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.895382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.895744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.895785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.896080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.896109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.896489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.896518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.896853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.896882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.897253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.897283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.897655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.897684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 
00:29:09.753 [2024-10-30 14:16:07.898045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.898076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.898456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.898485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.898852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.898882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.899292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.899321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.899677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.899706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.900052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.900082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.900325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.900354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.900724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.900763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.901132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.901161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.901526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.901555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 
00:29:09.753 [2024-10-30 14:16:07.901944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.901974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.902216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.902245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.902589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.902617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.902969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.903000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.903339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.903368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.903807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.903837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.904109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.904138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.904517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.904546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.904881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.904910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.905270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.905298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 
00:29:09.753 [2024-10-30 14:16:07.905744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.905793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.906118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.906148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.906536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.906565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.906921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.906951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.907308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.907337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.907682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.907711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.907948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.907978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.908352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.908383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.908730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.908770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 00:29:09.753 [2024-10-30 14:16:07.909136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.753 [2024-10-30 14:16:07.909164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.753 qpair failed and we were unable to recover it. 
[... the same three-line failure repeats for every further connection attempt timestamped 2024-10-30 14:16:07.909537 through 14:16:07.985411: connect() failed, errno = 111 in posix.c:1055:posix_sock_create, sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." ...]
00:29:09.758 [2024-10-30 14:16:07.985769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.758 [2024-10-30 14:16:07.985800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:09.758 qpair failed and we were unable to recover it.
00:29:09.758 [2024-10-30 14:16:07.986162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.758 [2024-10-30 14:16:07.986191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.758 qpair failed and we were unable to recover it. 00:29:09.758 [2024-10-30 14:16:07.986535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.986563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.986907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.986939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.987314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.987343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.987713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.987743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.988123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.988152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.988523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.988553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.988912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.988949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.989331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.989359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.989716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.989744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 
00:29:09.759 [2024-10-30 14:16:07.990115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.990144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.990544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.990574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.990817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.990850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.991230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.991259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.991688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.991717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.992056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.992086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.992439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.992467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.992854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.992884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.993251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.993280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.993640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.993669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 
00:29:09.759 [2024-10-30 14:16:07.994029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.994060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.994428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.994458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.994821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.994852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.995213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.995242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.995610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.995639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.996008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.996038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.996400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.996429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.996871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.996900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.997229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.997259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.997614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.997643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 
00:29:09.759 [2024-10-30 14:16:07.997991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.998022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.998461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.998490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.998834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.998865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.999235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.999264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.999625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.999654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:07.999896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.759 [2024-10-30 14:16:07.999929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.759 qpair failed and we were unable to recover it. 00:29:09.759 [2024-10-30 14:16:08.000343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.000372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.000597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.000629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.001015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.001045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.001382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.001412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 
00:29:09.760 [2024-10-30 14:16:08.001761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.001790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.002131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.002159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.002502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.002531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.002896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.002926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.003309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.003337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.003699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.003729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.004113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.004143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.004443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.004473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.004870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.004907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.005362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.005390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 
00:29:09.760 [2024-10-30 14:16:08.005794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.005824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.006163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.006193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.006564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.006593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.006976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.007007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.007379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.007407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.007829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.007859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.008220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.008249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.008609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.008638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.008992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.009022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.009265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.009298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 
00:29:09.760 [2024-10-30 14:16:08.009668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.009697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.010059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.010089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.010451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.010480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.010841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.010872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.011316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.011345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.011575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.011603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.011973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.012004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.012341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.012372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.012758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.012788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.013027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.013056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 
00:29:09.760 [2024-10-30 14:16:08.013290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.013322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.013681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.013711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.014123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.014153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.014488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.014518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.014866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.014896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.015251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.015286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.760 [2024-10-30 14:16:08.015680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.760 [2024-10-30 14:16:08.015709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.760 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.016111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.016141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.016491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.016520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.016883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.016913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 
00:29:09.761 [2024-10-30 14:16:08.017347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.017376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.017727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.017765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.018129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.018157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.018521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.018550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.018903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.018934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.019307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.019335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.019696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.019725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.020086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.020115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.020475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.020504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.020901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.020931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 
00:29:09.761 [2024-10-30 14:16:08.021238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.021275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.021632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.021662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.021998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.022028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.022398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.022427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.022793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.022823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.023211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.023239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.023578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.023608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.023965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.023996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.024353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.024381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.024758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.024788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 
00:29:09.761 [2024-10-30 14:16:08.025133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.025161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.025411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.025439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.025807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.025837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.026277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.026307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.026688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.026715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.027078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.027108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.027363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.027392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.027770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.027800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.028136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.028166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.028395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.028427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 
00:29:09.761 [2024-10-30 14:16:08.028804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.028834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.029186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.029215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:09.761 [2024-10-30 14:16:08.029575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.761 [2024-10-30 14:16:08.029604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:09.761 qpair failed and we were unable to recover it. 00:29:10.034 [2024-10-30 14:16:08.030007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.034 [2024-10-30 14:16:08.030040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.034 qpair failed and we were unable to recover it. 00:29:10.034 [2024-10-30 14:16:08.030393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.034 [2024-10-30 14:16:08.030423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.034 qpair failed and we were unable to recover it. 00:29:10.034 [2024-10-30 14:16:08.030799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.034 [2024-10-30 14:16:08.030829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.034 qpair failed and we were unable to recover it. 00:29:10.034 [2024-10-30 14:16:08.031291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.034 [2024-10-30 14:16:08.031327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.034 qpair failed and we were unable to recover it. 00:29:10.034 [2024-10-30 14:16:08.031678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.034 [2024-10-30 14:16:08.031708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.034 qpair failed and we were unable to recover it. 00:29:10.034 [2024-10-30 14:16:08.032053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.034 [2024-10-30 14:16:08.032083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.034 qpair failed and we were unable to recover it. 00:29:10.034 [2024-10-30 14:16:08.032438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.034 [2024-10-30 14:16:08.032468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.034 qpair failed and we were unable to recover it. 
00:29:10.034 [2024-10-30 14:16:08.032829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.032860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 00:29:10.035 [2024-10-30 14:16:08.033229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.033258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 00:29:10.035 [2024-10-30 14:16:08.033626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.033655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 00:29:10.035 [2024-10-30 14:16:08.034022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.034052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 00:29:10.035 [2024-10-30 14:16:08.034305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.034334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 00:29:10.035 [2024-10-30 14:16:08.034669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.034699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 00:29:10.035 [2024-10-30 14:16:08.035057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.035087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 00:29:10.035 [2024-10-30 14:16:08.035441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.035469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 00:29:10.035 [2024-10-30 14:16:08.035824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.035853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 00:29:10.035 [2024-10-30 14:16:08.036139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.035 [2024-10-30 14:16:08.036168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.035 qpair failed and we were unable to recover it. 
00:29:10.035 [2024-10-30 14:16:08.036509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.035 [2024-10-30 14:16:08.036539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.035 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for each of the intervening connection attempts between 14:16:08.036509 and 14:16:08.117004 ...]
00:29:10.040 [2024-10-30 14:16:08.116974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.040 [2024-10-30 14:16:08.117004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.040 qpair failed and we were unable to recover it.
00:29:10.040 [2024-10-30 14:16:08.117367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.117397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.117767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.117797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.118056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.118086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.118304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.118332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.118720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.118774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.119183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.119213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.119562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.119592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.119992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.120023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.120412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.120442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.120812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.120843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 
00:29:10.040 [2024-10-30 14:16:08.121249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.121279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.121631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.121660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.122020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.122050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.122415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.122445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.122812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.122842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.123231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.123261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.123538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.123567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.123926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.123955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.124314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.124343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 00:29:10.040 [2024-10-30 14:16:08.124717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.040 [2024-10-30 14:16:08.124758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.040 qpair failed and we were unable to recover it. 
00:29:10.040 [2024-10-30 14:16:08.125126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.125155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.125525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.125566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.125930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.125960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.126328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.126357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.126728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.126773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.127140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.127168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.127519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.127548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.127980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.128011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.128427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.128457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.128816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.128846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 
00:29:10.041 [2024-10-30 14:16:08.129198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.129227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.129591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.129621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.129979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.130008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.130355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.130386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.130630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.130659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.131027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.131058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.131420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.131450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.131805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.131836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.132088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.132115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.132371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.132399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 
00:29:10.041 [2024-10-30 14:16:08.132661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.132690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.133043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.133072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.133430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.133460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.133818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.133850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.134142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.134170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.134431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.134460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.134814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.134843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.135204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.135235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.135580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.135615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.136047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.136079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 
00:29:10.041 [2024-10-30 14:16:08.136333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.136365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.136741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.136783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.137159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.137189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.137560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.137588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.137833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.137867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.138121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.138154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.138523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.138551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.138907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.138937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.139303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.139333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.139694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.139724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 
00:29:10.041 [2024-10-30 14:16:08.140171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.041 [2024-10-30 14:16:08.140201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.041 qpair failed and we were unable to recover it. 00:29:10.041 [2024-10-30 14:16:08.140566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.140595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.140884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.140916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.141257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.141287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.141686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.141714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.141975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.142008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.142381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.142409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.142777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.142808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.143185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.143213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.143441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.143474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 
00:29:10.042 [2024-10-30 14:16:08.143769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.143798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.144161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.144190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.144552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.144581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.144929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.144959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.145321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.145350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.145705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.145735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.146082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.146111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.146474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.146503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.146863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.146893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.147252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.147281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 
00:29:10.042 [2024-10-30 14:16:08.147710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.147738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.148099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.148128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.148564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.148593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.148959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.148989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.149250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.149279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.149633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.149662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.150009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.150039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.150449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.150477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.150849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.150879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.151236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.151273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 
00:29:10.042 [2024-10-30 14:16:08.151644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.151673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.152107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.152136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.152544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.152573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.152926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.152957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.153199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.153230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.153592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.153621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.153883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.153915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.154290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.154319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.154614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.154642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.155011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.155041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 
00:29:10.042 [2024-10-30 14:16:08.155383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.155413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.155778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.155808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-10-30 14:16:08.156190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-10-30 14:16:08.156219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.156579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.156608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.156989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.157021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.157403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.157432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.157788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.157818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.158205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.158234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.158576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.158605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.158949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.158980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 
00:29:10.043 [2024-10-30 14:16:08.159346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.159375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.159741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.159782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.160130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.160159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.160520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.160548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.160907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.160938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.161298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.161327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.161689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.161724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.162081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.162110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.162471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.162501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.162855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.162886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 
00:29:10.043 [2024-10-30 14:16:08.163242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.163271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.163635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.163664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.163996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.164025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.164376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.164404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.164769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.164800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.165158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.165187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.165546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.165576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.165947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.165977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.166347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.166376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.166732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.166778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 
00:29:10.043 [2024-10-30 14:16:08.167162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.167191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.167560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.167589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.167977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.168007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.168358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.168387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.168769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.168799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.169148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.169177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.169518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.169547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.169926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-10-30 14:16:08.169956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-10-30 14:16:08.170315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.170343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.170709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.170738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 
00:29:10.044 [2024-10-30 14:16:08.171102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.171131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.171490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.171520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.171880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.171910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.172275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.172304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.172672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.172701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.173060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.173089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.173449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.173478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.173820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.173851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.174209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.174238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.174613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.174642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 
00:29:10.044 [2024-10-30 14:16:08.174986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.175017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.175395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.175424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.175793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.175824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.176188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.176217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.176562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.176592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.176925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.176954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.177324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.177353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.177712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.177757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.178141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.178170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.178459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.178488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 
00:29:10.044 [2024-10-30 14:16:08.178894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.178927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.179280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.179309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.179669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.179699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.180077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.180108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.180476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.180504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.180856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.180886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.181160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.181189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.181444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.181473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.181826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.181856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.182220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.182249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 
00:29:10.044 [2024-10-30 14:16:08.182620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.182649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.182991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.183022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.183396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.183426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.183786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.183816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.184198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.184227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.184606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.184634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.185006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.185035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.185389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.185418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.185782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.185813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-10-30 14:16:08.186162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-10-30 14:16:08.186192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 
00:29:10.045 [2024-10-30 14:16:08.186546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.186575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.186928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.186958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.187207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.187236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.187571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.187602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.187966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.187996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.188364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.188393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.188761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.188790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.189150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.189178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.189541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.189569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.190059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.190089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 
00:29:10.045 [2024-10-30 14:16:08.190445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.190474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.190838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.190869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.191222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.191252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.191585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.191614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.191979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.192008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.192376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.192404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.192759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.192790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.193128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.193158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.193527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.193557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.193816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.193845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 
00:29:10.045 [2024-10-30 14:16:08.194258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.194286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.194646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.194675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.195055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.195085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.195432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.195461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.195813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.195844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.196213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.196241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.196600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.196631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.196918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.196948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.197310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.197339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.197707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.197736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 
00:29:10.045 [2024-10-30 14:16:08.198002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.198031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.198404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.198432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.198796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.198828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.199187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.199215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.199580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.199609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.199850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.199883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.200238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.200268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.200670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.200698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.200925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.200957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-10-30 14:16:08.201341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.201370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 
00:29:10.045 [2024-10-30 14:16:08.201725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-10-30 14:16:08.201763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.202122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.202150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.202523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.202551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.202943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.202974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.203322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.203351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.203591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.203631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.204003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.204033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.204397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.204425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.204781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.204810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.205180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.205208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 
00:29:10.046 [2024-10-30 14:16:08.205586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.205615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.205986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.206015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.206351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.206381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.206743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.206802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.207076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.207107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.207501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.207529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.207943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.207974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.208319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.208348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.208584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.208617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.208984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.209014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 
00:29:10.046 [2024-10-30 14:16:08.209425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.209453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.209811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.209842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.210215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.210244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.210607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.210636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.211010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.211040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.211391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.211419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.211786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.211817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.212197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.212226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.212589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.212618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.212989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.213018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 
00:29:10.046 [2024-10-30 14:16:08.213259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.213288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.213632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.213662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.214013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.214042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.214401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.214430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.214792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.214822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.215164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.215194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.215582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.215610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.215956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.215987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.216352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.216381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.216828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.216857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 
00:29:10.046 [2024-10-30 14:16:08.217144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.217176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.217572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-10-30 14:16:08.217600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-10-30 14:16:08.217828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.217860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.218264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.218293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.218700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.218729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.219144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.219174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.219527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.219561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.219915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.219946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.220312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.220341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.220576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.220608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 
00:29:10.047 [2024-10-30 14:16:08.220985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.221015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.221384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.221413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.221755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.221785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.222030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.222060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.222429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.222458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.222817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.222848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.223266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.223295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.223646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.223675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.224041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.224073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.224433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.224463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 
00:29:10.047 [2024-10-30 14:16:08.224822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.224852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.225196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.225225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.225592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.225622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.225970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.225999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.226212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.226242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.226688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.226716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.227085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.227116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.227475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.227503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.227848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.227879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.228246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.228275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 
00:29:10.047 [2024-10-30 14:16:08.228639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.228667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.229035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.229066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.229406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.229436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.229786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.229823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.230126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.230155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.230506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.230535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.230877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.230906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.231244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.231273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.047 [2024-10-30 14:16:08.231641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.047 [2024-10-30 14:16:08.231670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.047 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.232031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.232060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 
00:29:10.048 [2024-10-30 14:16:08.232425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.232454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.232792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.232820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.233094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.233123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.233535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.233564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.233909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.233947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.234291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.234320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.234679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.234708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.235153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.235185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.235617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.235646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 00:29:10.048 [2024-10-30 14:16:08.235917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.048 [2024-10-30 14:16:08.235947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.048 qpair failed and we were unable to recover it. 
00:29:10.053 [2024-10-30 14:16:08.309213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.309242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.309606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.309635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.309995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.310026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.310393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.310421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.310835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.310865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.311095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.311126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.311478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.311507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.311934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.311964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.312323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.312353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.312726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.312768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 
00:29:10.053 [2024-10-30 14:16:08.313098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.313128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.313497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.313527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.313887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.313917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.314268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.314296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.314670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.314699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.315067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.315097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.315457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.315485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.315864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.315895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.316255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.316283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.316649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.316678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 
00:29:10.053 [2024-10-30 14:16:08.317047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.317077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.317466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.317495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.317856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.317887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.318097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.318129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.318494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.318522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.318887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.318917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.319178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.319206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.319571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.319600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.319971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.320001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.320231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.320263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 
00:29:10.053 [2024-10-30 14:16:08.320617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.320646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.320994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.321024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.321436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.321465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.053 [2024-10-30 14:16:08.321721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.053 [2024-10-30 14:16:08.321766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.053 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.322129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.322161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.322524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.322556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.322923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.322953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.323319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.323348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.323604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.323639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.324009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.324038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 
00:29:10.326 [2024-10-30 14:16:08.324399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.324427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.324791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.324820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.325259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.325287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.325642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.325673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.326023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.326053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.326286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.326318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.326671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.326700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.327117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.327147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.327504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.327533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.327891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.327922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 
00:29:10.326 [2024-10-30 14:16:08.328273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.328302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.328561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.328589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.328949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.328979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.329341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.329371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.329732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.329771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.330151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.330180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.330540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.330570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.330928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.330958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.331318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.331348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 00:29:10.326 [2024-10-30 14:16:08.331709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.326 [2024-10-30 14:16:08.331737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.326 qpair failed and we were unable to recover it. 
00:29:10.326 [2024-10-30 14:16:08.332095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.332126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.332492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.332521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.332890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.332921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.333289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.333319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.333653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.333684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.334046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.334076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.334473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.334503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.334866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.334896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.335263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.335291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.335662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.335690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 
00:29:10.327 [2024-10-30 14:16:08.336072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.336102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.336468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.336499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.336882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.336913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.337262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.337293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.337631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.337660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.337999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.338031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.338395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.338425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.338789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.338822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.339216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.339246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.339504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.339537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 
00:29:10.327 [2024-10-30 14:16:08.339918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.339950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.340313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.340343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.340715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.340745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.341148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.341178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.341521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.341550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.341898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.341929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.342281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.342310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.342694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.342724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.343036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.343066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.343401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.343431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 
00:29:10.327 [2024-10-30 14:16:08.343813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.343843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.344124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.344153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.344490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.344521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.344885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.344916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.345287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.345317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.345678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.345707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.346080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.346109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.346476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.346505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.346888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.346921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.327 [2024-10-30 14:16:08.347284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.347313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 
00:29:10.327 [2024-10-30 14:16:08.347678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.327 [2024-10-30 14:16:08.347708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.327 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.348089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.348118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.348484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.348514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.348874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.348905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.349267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.349296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.349665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.349694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.350066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.350102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.350453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.350482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.350856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.350888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.351213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.351241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 
00:29:10.328 [2024-10-30 14:16:08.351493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.351521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.351875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.351907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.352273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.352302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.352682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.352711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.353010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.353040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.353467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.353496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.353871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.353900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.354258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.354289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.354647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.354676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.355046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.355076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 
00:29:10.328 [2024-10-30 14:16:08.355440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.355470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.355842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.355873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.356227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.356256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.356628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.356658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.357016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.357046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.357386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.357414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.357806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.357836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.358191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.358220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.358604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.358633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.358903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.358933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 
00:29:10.328 [2024-10-30 14:16:08.359296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.359325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.359692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.359723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.360081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.360112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.360456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.360495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.360797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.360828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.361190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.361219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.361615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.361644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.361992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.362030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.362375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.362405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 00:29:10.328 [2024-10-30 14:16:08.362771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.328 [2024-10-30 14:16:08.362802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.328 qpair failed and we were unable to recover it. 
00:29:10.328 [2024-10-30 14:16:08.363164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.328 [2024-10-30 14:16:08.363192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.328 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt logged between 14:16:08.363 and 14:16:08.443 ...]
00:29:10.334 [2024-10-30 14:16:08.443516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.334 [2024-10-30 14:16:08.443546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.334 qpair failed and we were unable to recover it.
00:29:10.334 [2024-10-30 14:16:08.443904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.443935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.444298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.444333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.444767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.444798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.445148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.445177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.445534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.445562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.446050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.446079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.446498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.446527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.446876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.446907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.447270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.447298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.447660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.447689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 
00:29:10.334 [2024-10-30 14:16:08.448052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.448082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.448428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.448456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.448806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.448837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.449193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.449221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.449572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.449601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.449942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.449972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.450318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.450347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.450585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.450613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.450987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.451016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.451352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.451382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 
00:29:10.334 [2024-10-30 14:16:08.451764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.451795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.452184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.452212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.452542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.452570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.452944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.452974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.453280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.453310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.453685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.453713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.454094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-10-30 14:16:08.454124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-10-30 14:16:08.454490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.454518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.454882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.454912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.455345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.455375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 
00:29:10.335 [2024-10-30 14:16:08.455626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.455654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.456001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.456031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.456401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.456430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.456794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.456825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.457195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.457224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.457567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.457596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.457976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.458007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.458335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.458364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.458771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.458800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.459142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.459172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 
00:29:10.335 [2024-10-30 14:16:08.459571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.459601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.459958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.459988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.460364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.460393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.460835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.460866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.461221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.461250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.461588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.461617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.461966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.461997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.462354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.462383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.462754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.462783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.463091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.463119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 
00:29:10.335 [2024-10-30 14:16:08.463479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.463508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.463863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.463894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.464238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.464266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.464627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.464656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.465025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.465057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.465403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.465432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.465793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.465824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.466191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.466220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.466578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.466607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.466976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.467006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 
00:29:10.335 [2024-10-30 14:16:08.467335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.467364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.467738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.467776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.468135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.468163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.468535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.468565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.468924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.468954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.469312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.469341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.469698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-10-30 14:16:08.469727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-10-30 14:16:08.470095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.470124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.470386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.470415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.470793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.470829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 
00:29:10.336 [2024-10-30 14:16:08.471055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.471085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.471447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.471476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.471860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.471890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.472258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.472286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.472644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.472672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.473042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.473073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.473437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.473464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.473814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.473844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.474106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.474134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.474483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.474511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 
00:29:10.336 [2024-10-30 14:16:08.474867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.474898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.475263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.475292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.475645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.475673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.476045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.476076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.476445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.476474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.476849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.476878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.477231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.477259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.477505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.477537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.477851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.477881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.478260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.478289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 
00:29:10.336 [2024-10-30 14:16:08.478648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.478676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.479074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.479103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.479450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.479479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.479825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.479854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.480232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.480261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.480614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.480642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.480995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.481025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.481263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.481292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.481660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.481689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.482048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.482078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 
00:29:10.336 [2024-10-30 14:16:08.482434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.482462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.482811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.482841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.483218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.483246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.483592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.483621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.483988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.484018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.484383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.484411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.484770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.484800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.485155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.485185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-10-30 14:16:08.485555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-10-30 14:16:08.485584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.485989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.486019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 
00:29:10.337 [2024-10-30 14:16:08.486269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.486307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.486682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.486711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.487078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.487109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.487465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.487493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.487864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.487894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.488261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.488290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.488648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.488677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.489043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.489073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.489435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.489463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.489830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.489860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 
00:29:10.337 [2024-10-30 14:16:08.490218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.490246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.490603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.490632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.491002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.491032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.491379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.491408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.491784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.491814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.492218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.492247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.492612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.492641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.493022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.493055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.493433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.493462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.493822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.493854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 
00:29:10.337 [2024-10-30 14:16:08.494230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.494259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.494622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.494654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.495004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.495034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.495388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.495416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.495772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.495801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.496173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.496201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.496548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.496576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.496940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.496976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.497322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.497349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.497601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.497629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 
00:29:10.337 [2024-10-30 14:16:08.498004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.498033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.498296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.498323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.498726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.498767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.499151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.499179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.499582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.499610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.499942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.499971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.500326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.500353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.500703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.500731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.501076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.501105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-10-30 14:16:08.501482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-10-30 14:16:08.501511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 
00:29:10.338 [2024-10-30 14:16:08.501867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.501899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.502269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.502300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.502647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.502678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.503043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.503074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.503429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.503460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.503825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.503856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.504235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.504265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.504508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.504538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.504907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.504937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.505299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.505329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 
00:29:10.338 [2024-10-30 14:16:08.505689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.505720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.506072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.506103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.506460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.506490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.506763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.506795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.507174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.507204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.507558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.507588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.507963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.507995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.508350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.508381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.508741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.508784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.508966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.508995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 
00:29:10.338 [2024-10-30 14:16:08.509366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.509396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.509627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.509660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.510020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.510051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.510435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.510464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.510824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.510855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.511219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.511249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.511646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.511677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.512032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.512064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.512428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.512465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.512826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.512857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 
00:29:10.338 [2024-10-30 14:16:08.513212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.513242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.513606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.513636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.513989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.514021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.514367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.514397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.514857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.514888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.515180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.515209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.515461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.515493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.515885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.515917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.516281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.516310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-10-30 14:16:08.516685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.516715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 
00:29:10.338 [2024-10-30 14:16:08.517122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-10-30 14:16:08.517153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.517594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.517624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.518116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.518146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.518486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.518516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.518882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.518912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.519275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.519304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.519652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.519681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.520033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.520065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.520409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.520438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.520808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.520838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 
00:29:10.339 [2024-10-30 14:16:08.521221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.521250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.521612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.521641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.522052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.522083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.522415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.522445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.522811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.522841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.523069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.523105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.523452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.523481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.523855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.523886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.524264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.524294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.524510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.524538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 
00:29:10.339 [2024-10-30 14:16:08.524999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.525029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.525369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.525399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.525796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.525828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.526198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.526226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.526606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.526635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.526994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.527025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.527406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.527436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.527814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.527846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.528253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.528281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.528716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.528757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 
00:29:10.339 [2024-10-30 14:16:08.529097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.529127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.529477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.529506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.529865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.529897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.530277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.530307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.530555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.530584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.530973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.531003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.531360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.531390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-10-30 14:16:08.531662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-10-30 14:16:08.531692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.531960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.531990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.532344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.532373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 
00:29:10.340 [2024-10-30 14:16:08.532768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.532799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.533153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.533181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.533540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.533569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.533948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.533979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.534324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.534353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.534715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.534743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.535082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.535112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.535464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.535493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.535734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.535774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.536127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.536155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 
00:29:10.340 [2024-10-30 14:16:08.536513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.536542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.536909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.536937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.537348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.537377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.537728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.537769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.538102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.538131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.538555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.538585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.538954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.538991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.539348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.539377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.539789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.539820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.540195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.540224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 
00:29:10.340 [2024-10-30 14:16:08.540647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.540677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.541029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.541060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.541416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.541445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.541805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.541835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.542211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.542239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.542599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.542629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.543006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.543036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.543403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.543431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.543859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.543890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.544240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.544269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 
00:29:10.340 [2024-10-30 14:16:08.544626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.544656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.545015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.545045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.545447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.545477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.545828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.545858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.546233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.546261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.546668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.546698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.546975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.547004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.547340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-10-30 14:16:08.547369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-10-30 14:16:08.547722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.547764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.548127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.548155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 
00:29:10.341 [2024-10-30 14:16:08.548527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.548555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.548931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.548960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.549387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.549415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.549763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.549799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.550152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.550181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.550544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.550573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.551015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.551044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.551403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.551431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.551778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.551808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.552188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.552218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 
00:29:10.341 [2024-10-30 14:16:08.552577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.552607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.552976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.553007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.553367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.553396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.553767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.553795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.554173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.554202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.554558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.554588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.554933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.554963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.555323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.555352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.555714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.555744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.556132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.556161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 
00:29:10.341 [2024-10-30 14:16:08.556418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.556446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.556796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.556826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.557194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.557222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.557583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.557611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.557881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.557910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.558265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.558294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.558663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.558691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.559106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.559135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.559479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.559508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-10-30 14:16:08.559867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-10-30 14:16:08.559896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 
00:29:10.341 [2024-10-30 14:16:08.560315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.341 [2024-10-30 14:16:08.560343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.341 qpair failed and we were unable to recover it.
00:29:10.341 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt between 14:16:08.560692 and 14:16:08.640379 ...]
00:29:10.620 [2024-10-30 14:16:08.640779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.620 [2024-10-30 14:16:08.640809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.620 qpair failed and we were unable to recover it.
00:29:10.620 [2024-10-30 14:16:08.641142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.641172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.641540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.641569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.641933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.641964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.642326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.642356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.642716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.642745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.643171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.643200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.643540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.643568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.643878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.643909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.644260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.644297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.644646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.644675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 
00:29:10.620 [2024-10-30 14:16:08.645014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.645044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.645408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.645437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.645802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.645831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.646202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.646230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.646617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.646646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.646974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.647004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.647438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.647466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.647875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.647906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.648272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.648302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.648632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.648660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 
00:29:10.620 [2024-10-30 14:16:08.648910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.648939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.649299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.649328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.649677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.649705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.650099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.650128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.650477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.650506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.650873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.650903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.651250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.651279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.651650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.651679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.652022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.652051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.652402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.652430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 
00:29:10.620 [2024-10-30 14:16:08.652795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.652826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.653102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.653131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.653481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.653511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.653864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.653894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.654139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.654167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.654534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.654562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.655018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.655048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.620 qpair failed and we were unable to recover it. 00:29:10.620 [2024-10-30 14:16:08.655386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-10-30 14:16:08.655415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.655762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.655792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.656065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.656092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 
00:29:10.621 [2024-10-30 14:16:08.656458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.656486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.656850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.656882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.657324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.657352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.657711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.657739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.658152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.658181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.658416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.658444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.658889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.658920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.659247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.659276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.659637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.659665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.660039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.660075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 
00:29:10.621 [2024-10-30 14:16:08.660318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.660353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.660677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.660706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.661057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.661089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.661451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.661479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.661844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.661874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.662223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.662252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.662629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.662657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.663000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.663031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.663266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.663297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.663644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.663672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 
00:29:10.621 [2024-10-30 14:16:08.664028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.664059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.664417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.664445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.664810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.664839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.665205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.665234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.665603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.665631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.666005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.666034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.666395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.666424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.666791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.666820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.667103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.667131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.667384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.667414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 
00:29:10.621 [2024-10-30 14:16:08.667766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.667796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.668155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.668183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.668423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.668455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.668833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.668863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.669245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.669274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.669630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.669658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.670023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.670061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.670416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.670445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.621 qpair failed and we were unable to recover it. 00:29:10.621 [2024-10-30 14:16:08.670790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.621 [2024-10-30 14:16:08.670820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.671166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.671195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 
00:29:10.622 [2024-10-30 14:16:08.671441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.671470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.671707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.671739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.672095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.672125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.672492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.672521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.672895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.672927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.673159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.673191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.673615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.673644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.673943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.673974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.674344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.674373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.674606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.674637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 
00:29:10.622 [2024-10-30 14:16:08.675008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.675039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.675398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.675427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.675843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.675873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.676237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.676265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.676635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.676664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.677003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.677033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.677393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.677422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.677785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.677816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.678198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.678226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.678586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.678616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 
00:29:10.622 [2024-10-30 14:16:08.678971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.679000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.679364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.679392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.679767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.679797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.680072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.680100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.680460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.680489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.680859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.680889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.681257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.681286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.681536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.681564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.681918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.681948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.682308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.682337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 
00:29:10.622 [2024-10-30 14:16:08.682697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.682725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.683168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.683197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.683437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.683468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.622 qpair failed and we were unable to recover it. 00:29:10.622 [2024-10-30 14:16:08.683825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.622 [2024-10-30 14:16:08.683856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.684216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.684245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.684603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.684631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.684971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.685002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.685336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.685371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.685714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.685743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.686082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.686112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 
00:29:10.623 [2024-10-30 14:16:08.686449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.686478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.686838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.686869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.687234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.687263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.687618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.687646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.688016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.688045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.688408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.688437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.688792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.688822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.689177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.689206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.689557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.689585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.689924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.689953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 
00:29:10.623 [2024-10-30 14:16:08.690315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.690343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.690703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.690732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.691116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.691145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.691491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.691519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.691864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.691895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.692266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.692295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.692655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.692685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.693039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.693068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.693425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.693454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.693828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.693857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 
00:29:10.623 [2024-10-30 14:16:08.694222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.694250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.694626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.694655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.694940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.694969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.695333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.695362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.695725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.695766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.696149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.696178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.696533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.696562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.696849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.696878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.697247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.697276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.697640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.697669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 
00:29:10.623 [2024-10-30 14:16:08.698035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.698064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.698425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.698453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.698821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.698851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.699218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.623 [2024-10-30 14:16:08.699246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.623 qpair failed and we were unable to recover it. 00:29:10.623 [2024-10-30 14:16:08.699604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.699633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.700006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.700036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.700373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.700402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.700770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.700801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.701163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.701192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.701560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.701588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 
00:29:10.624 [2024-10-30 14:16:08.701925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.701954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.702320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.702350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.702703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.702732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.703065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.703094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.703465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.703495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.703832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.703862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.704216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.704244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.704623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.704652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.705019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.705050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.705408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.705436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 
00:29:10.624 [2024-10-30 14:16:08.705787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.705817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.706225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.706254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.706591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.706620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.706973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.707003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.707367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.707396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.707827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.707856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.708218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.708247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.708608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.708637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.708911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.708940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.709294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.709323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 
00:29:10.624 [2024-10-30 14:16:08.709698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.709730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.710126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.710155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.710517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.710546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.710918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.710948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.711313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.711342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.711704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.711739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.712107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.712137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.712495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.712526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.712890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.712921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.713284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.713313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 
00:29:10.624 [2024-10-30 14:16:08.713768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.713799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.714146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.714177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.714510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.714538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.714844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.714873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.715237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.715265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.624 [2024-10-30 14:16:08.715630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.624 [2024-10-30 14:16:08.715658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.624 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.716019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.716048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.716299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.716328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.716664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.716693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.717074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.717104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 
00:29:10.625 [2024-10-30 14:16:08.717366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.717394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.717769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.717800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.718181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.718210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.718561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.718590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.718970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.719001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.719341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.719371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.719741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.719798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.720166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.720195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.720562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.720590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.720935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.720964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 
00:29:10.625 [2024-10-30 14:16:08.721234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.721263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.721623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.721652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.721998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.722028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.722385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.722415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.722775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.722806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.723165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.723193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.723552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.723581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.723968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.723997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.724377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.724406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.724658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.724686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 
00:29:10.625 [2024-10-30 14:16:08.725042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.725072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.725607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.725643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.726003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.726038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.726395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.726424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.726811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.726841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.727183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.727213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.727565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.727594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.727964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.727995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.728356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.728385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.728743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.728801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 
00:29:10.625 [2024-10-30 14:16:08.729187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.729216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.729591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.729621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.729976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.730006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.730357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.730387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.730735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.730777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.731133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.731163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.625 [2024-10-30 14:16:08.731523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.625 [2024-10-30 14:16:08.731552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.625 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.731920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.731951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.732307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.732336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.732686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.732717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 
00:29:10.626 [2024-10-30 14:16:08.733129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.733159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.733520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.733549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.733918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.733948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.734304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.734333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.734691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.734719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.735093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.735123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.735479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.735507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.735871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.735901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.736277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.736306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.736668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.736696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 
00:29:10.626 [2024-10-30 14:16:08.737056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.737085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.737442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.737471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.737832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.737861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.738230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.738265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.738633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.738662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.739037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.739066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.739401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.739430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.739682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.739711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.740089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.740119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.740490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.740518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 
00:29:10.626 [2024-10-30 14:16:08.740866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.740897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.741282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.741312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.741646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.741675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.741989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.742021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.742372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.742401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.742765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.742796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.743146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.743174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.743518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.743547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.743914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.743947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.744283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.744312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 
00:29:10.626 [2024-10-30 14:16:08.744668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.744699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.745143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.745173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.745469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.745498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.626 [2024-10-30 14:16:08.745853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.626 [2024-10-30 14:16:08.745883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.626 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.746240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.746268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.746655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.746684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.747059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.747090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.747495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.747523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.747881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.747911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.748267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.748298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 
00:29:10.627 [2024-10-30 14:16:08.748661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.748689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.749062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.749093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.749337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.749368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.749720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.749762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.750096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.750125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.750491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.750519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.750879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.750909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.751286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.751314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.751678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.751708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.752124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.752155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 
00:29:10.627 [2024-10-30 14:16:08.752518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.752547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.752920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.752949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.753299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.753328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.753685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.753715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.754089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.754132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.754468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.754497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.754769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.754801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.755164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.755194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.755534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.755564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.755905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.755938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 
00:29:10.627 [2024-10-30 14:16:08.756317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.756347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.756726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.756768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.757018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.757050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.757406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.757436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.757812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.757843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.758211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.758240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.758613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.758643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.759024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.759054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.759452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.759481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.759849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.759880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 
00:29:10.627 [2024-10-30 14:16:08.760234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.760263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.760634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.760662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.761029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.761058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.627 [2024-10-30 14:16:08.761420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.627 [2024-10-30 14:16:08.761450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.627 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.761780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.761809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.762164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.762194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.762552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.762582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.762923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.762954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.763319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.763348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.763710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.763738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 
00:29:10.628 [2024-10-30 14:16:08.764119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.764149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.764518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.764554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.764903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.764933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.765302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.765332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.765691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.765720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.766153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.766183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.766542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.766570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.766945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.766975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.767340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.767369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.767730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.767769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 
00:29:10.628 [2024-10-30 14:16:08.768128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.768157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.768538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.768567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.768913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.768947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.769326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.769356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.769719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.769760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.770139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.770171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.770523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.770555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.770911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.770942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.771101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.771131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.771486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.771516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 
00:29:10.628 [2024-10-30 14:16:08.771788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.771817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.772166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.772194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.772561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.772591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.772971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.773002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.773345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.773374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.773740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.773782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.774136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.774165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.774513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.774543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.774902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.774933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.775295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.775325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 
00:29:10.628 [2024-10-30 14:16:08.775690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.775719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.776084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.776123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.776451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.776481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.776848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.776880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.628 [2024-10-30 14:16:08.777157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.628 [2024-10-30 14:16:08.777186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.628 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.777442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.777471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.777834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.777865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.778230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.778260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.778558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.778595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.778953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.778983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 
00:29:10.629 [2024-10-30 14:16:08.779340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.779369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.779759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.779791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.780176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.780212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.780566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.780595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.780970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.781000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.781338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.781368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.781725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.781766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.782068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.782096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.782467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.782498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.782864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.782897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 
00:29:10.629 [2024-10-30 14:16:08.783263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.783294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.783669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.783697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.784056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.784085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.784438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.784471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.784826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.784857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.785244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.785272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.785633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.785664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.786019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.786050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.786292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.786321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.786706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.786735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 
00:29:10.629 [2024-10-30 14:16:08.787097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.787128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.787485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.787514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.787820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.787850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.788089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.788121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.788475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.788504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.788813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.788843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.789183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.789212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.789570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.789598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.789974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.790003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.790363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.790398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 
00:29:10.629 [2024-10-30 14:16:08.790762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.790795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.791113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.791142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.791391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.791420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.791786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.791816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.792086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.792114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.792468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.629 [2024-10-30 14:16:08.792497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.629 qpair failed and we were unable to recover it. 00:29:10.629 [2024-10-30 14:16:08.792933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.792963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.793324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.793355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.793718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.793758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.794115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.794143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 
00:29:10.630 [2024-10-30 14:16:08.794509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.794538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.794906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.794937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.795310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.795338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.795708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.795740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.796079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.796108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.796474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.796503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.796863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.796894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.797278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.797308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.797672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.797700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.797949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.797983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 
00:29:10.630 [2024-10-30 14:16:08.798367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.798396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.798771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.798801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.799154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.799184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.799416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.799447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.799671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.799705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.800084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.800114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.800490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.800521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.800916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.800948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.801326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.801355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.801710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.801739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 
00:29:10.630 [2024-10-30 14:16:08.802119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.802147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.802382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.802414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.802803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.802833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.803200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.803227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.803582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.803612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.803979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.804009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.804237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.804268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.804627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.804656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.804835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.804869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.805286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.805315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 
00:29:10.630 [2024-10-30 14:16:08.805657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.805693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.806082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.806115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.806511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.806539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.806886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.806924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.807326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.807356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.807589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.807618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.630 qpair failed and we were unable to recover it. 00:29:10.630 [2024-10-30 14:16:08.807983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.630 [2024-10-30 14:16:08.808014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.808251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.808282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.808627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.808656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.809004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.809035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 
00:29:10.631 [2024-10-30 14:16:08.809474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.809503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.809848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.809877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.810255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.810284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.810618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.810646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.811000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.811031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.811456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.811486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.811839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.811869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.812221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.812256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.812599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.812628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.812983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.813015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 
00:29:10.631 [2024-10-30 14:16:08.813346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.813375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.813706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.813733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.814076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.814106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.814450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.814479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.814856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.814885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.815128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.815159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.815510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.815548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.815906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.815943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.816320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.816349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.816699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.816729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 
00:29:10.631 [2024-10-30 14:16:08.817117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.817147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.817514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.817544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.817809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.817840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.818210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.818238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.818583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.818614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.819011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.819042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1210547 Killed "${NVMF_APP[@]}" "$@" 00:29:10.631 [2024-10-30 14:16:08.819374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.819406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 [2024-10-30 14:16:08.819768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.819798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:10.631 [2024-10-30 14:16:08.820109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.820140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 
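The shell diagnostic interleaved above (target_disconnect.sh line 36 reporting PID 1210547 as Killed) is the test harness taking down the nvmf target application, which is what the flood of errno 111 failures reflects: on Linux, errno 111 is ECONNREFUSED, the value connect() returns when the peer host is reachable but nothing is listening on the requested port (here 10.0.0.2:4420, the standard NVMe/TCP port). The following is a minimal illustrative sketch, not SPDK code, of the condition posix_sock_create keeps logging:

    /* Minimal sketch (not SPDK code): a plain TCP connect() to an address
     * where the host is reachable but no listener is bound to the port
     * fails with errno 111 (ECONNREFUSED); an unreachable host would
     * instead time out or report a different errno. Address and port are
     * taken from the log above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the target application killed, this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }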
00:29:10.631 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:10.631 [2024-10-30 14:16:08.820491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.820522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.631 [2024-10-30 14:16:08.820889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.631 [2024-10-30 14:16:08.820923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.631 qpair failed and we were unable to recover it. 00:29:10.631 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.632 [2024-10-30 14:16:08.821309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.821340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.821600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.821633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.823464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.823524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.823934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.823967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.824366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.824395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.824725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.824768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 
00:29:10.632 [2024-10-30 14:16:08.825150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.825180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.825554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.825583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.825943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.825977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.827044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.827093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.827475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.827507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.827873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.827904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.828220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.828249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.828601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.828631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.829012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.829043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.829385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.829413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 
00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1211394
00:29:10.632 [2024-10-30 14:16:08.829771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.632 [2024-10-30 14:16:08.829804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.632 qpair failed and we were unable to recover it.
00:29:10.632 [2024-10-30 14:16:08.829966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.632 [2024-10-30 14:16:08.829995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.632 qpair failed and we were unable to recover it.
00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1211394
00:29:10.632 [2024-10-30 14:16:08.830327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.632 [2024-10-30 14:16:08.830356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.632 qpair failed and we were unable to recover it.
00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1211394 ']'
00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:10.632 [2024-10-30 14:16:08.830738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.632 [2024-10-30 14:16:08.830806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.632 qpair failed and we were unable to recover it.
00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:10.632 [2024-10-30 14:16:08.831172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.632 [2024-10-30 14:16:08.831204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.632 qpair failed and we were unable to recover it.
00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:10.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:10.632 [2024-10-30 14:16:08.831591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.632 [2024-10-30 14:16:08.831623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.632 qpair failed and we were unable to recover it.
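Editor's note: the trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then waits (waitforlisten, rpc_addr=/var/tmp/spdk.sock, max_retries=100) for the target to start listening on its RPC UNIX domain socket. The C sketch below only illustrates that wait-for-socket idea; it is not the actual bash helper from the SPDK test scripts, and the 100 ms retry interval is an assumption.

/* Illustrative sketch only -- not SPDK's waitforlisten helper. Poll until a
 * process accepts connections on a UNIX domain socket (here /var/tmp/spdk.sock),
 * giving up after a bounded number of retries. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_unix_socket(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;              /* someone is accepting on the socket */
        }
        close(fd);
        usleep(100 * 1000);        /* assumed 100 ms between attempts */
    }
    return -1;                     /* gave up, like the max_retries bound above */
}

int main(void)
{
    if (wait_for_unix_socket("/var/tmp/spdk.sock", 100) == 0)
        printf("RPC socket is up\n");
    else
        printf("gave up waiting for RPC socket\n");
    return 0;
}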
00:29:10.632 14:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.632 [2024-10-30 14:16:08.831973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.832006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.832263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.832294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.832664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.832694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.832981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.833014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.833403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.833433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.833790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.833821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.834177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.834207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.834582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.834612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.834963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.834995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.835378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.835410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 
00:29:10.632 [2024-10-30 14:16:08.835805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.835836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.836103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.836141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.836499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.836535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.836939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.632 [2024-10-30 14:16:08.836970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.632 qpair failed and we were unable to recover it. 00:29:10.632 [2024-10-30 14:16:08.837371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.837403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.837771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.837802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.838160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.838188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.838588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.838620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.838870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.838900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.839265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.839295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 
00:29:10.633 [2024-10-30 14:16:08.839640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.839671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.840078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.840110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.840460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.840488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.840819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.840850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.841159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.841193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.841577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.841608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.842034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.842066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.842436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.842464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.842827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.842862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.843210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.843245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 
00:29:10.633 [2024-10-30 14:16:08.843611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.843641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.844048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.844080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.844394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.844424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.844781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.844812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.845085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.845113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.845388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.845417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.845777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.845806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.846137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.846167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.846565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.846595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.846893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.846924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 
00:29:10.633 [2024-10-30 14:16:08.847298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.847328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.847701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.847732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.848154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.848186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.848466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.848496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.848871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.848902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.849284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.849316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.849659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.849688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.850089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.850119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.850478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.850508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.850865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.850896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 
00:29:10.633 [2024-10-30 14:16:08.851277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.851310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.851618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.851649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.852033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.852070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.852398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.852428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.852786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.852818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.633 qpair failed and we were unable to recover it. 00:29:10.633 [2024-10-30 14:16:08.853190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.633 [2024-10-30 14:16:08.853221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.853565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.853596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.853972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.854004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.854328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.854359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.854628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.854661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 
00:29:10.634 [2024-10-30 14:16:08.855005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.855037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.855392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.855422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.855765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.855796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.856176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.856206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.856552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.856582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.856961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.856992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.857359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.857391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.857768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.857799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.858181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.858212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.858559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.858589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 
00:29:10.634 [2024-10-30 14:16:08.858967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.858997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.859262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.859290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.859637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.859668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.860057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.860088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.860452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.860482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.860825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.860856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.861244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.861273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.861622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.861660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.861991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.862022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.862364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.862404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 
00:29:10.634 [2024-10-30 14:16:08.862761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.862792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.863150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.863181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.863535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.863564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.863885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.863914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.864274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.864304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.864521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.864550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.864911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.864943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.865224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.865255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.865527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.865561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.865827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.865858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 
00:29:10.634 [2024-10-30 14:16:08.866260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.866291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.866671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.866702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.867095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.867127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.867497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.867528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.867771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.867802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.868152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.868182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.634 [2024-10-30 14:16:08.868419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.634 [2024-10-30 14:16:08.868449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.634 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.868833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.868863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.869239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.869269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.869620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.869650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 
00:29:10.635 [2024-10-30 14:16:08.869900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.869929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.870306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.870335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.870692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.870720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.871112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.871143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.871474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.871502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.871870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.871900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.872277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.872306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.872680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.872708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.873096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.873126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.873495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.873525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 
00:29:10.635 [2024-10-30 14:16:08.874042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.874073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.874486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.874514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.874762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.874791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.875145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.875173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.875610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.875638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.876054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.876085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.876209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.876238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.876460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.876489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.876827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.876857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.877113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.877144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 
00:29:10.635 [2024-10-30 14:16:08.877513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.877549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.877930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.877963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.878196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.878225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.878654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.878686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.879032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.879064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.879462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.879492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.879763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.879794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.880234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.880264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.880645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.880676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.880936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.880967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 
00:29:10.635 [2024-10-30 14:16:08.881222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.881250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.881485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.881519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.881738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.635 [2024-10-30 14:16:08.881787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.635 qpair failed and we were unable to recover it. 00:29:10.635 [2024-10-30 14:16:08.882165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.882194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.882461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.882490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.882878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.882911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.883154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.883182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.883552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.883580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.883811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.883840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.884096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.884129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 
00:29:10.636 [2024-10-30 14:16:08.884511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.636 [2024-10-30 14:16:08.884539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.636 qpair failed and we were unable to recover it.
00:29:10.636 [2024-10-30 14:16:08.884779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.636 [2024-10-30 14:16:08.884808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.636 qpair failed and we were unable to recover it.
00:29:10.636 [2024-10-30 14:16:08.885197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.636 [2024-10-30 14:16:08.885228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.636 qpair failed and we were unable to recover it.
00:29:10.636 [2024-10-30 14:16:08.885495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.636 [2024-10-30 14:16:08.885523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.636 qpair failed and we were unable to recover it.
00:29:10.636 [2024-10-30 14:16:08.885718] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization...
00:29:10.636 [2024-10-30 14:16:08.885790] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:10.636 [2024-10-30 14:16:08.885884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.636 [2024-10-30 14:16:08.885915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.636 qpair failed and we were unable to recover it.
00:29:10.636 [2024-10-30 14:16:08.886156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.636 [2024-10-30 14:16:08.886184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.636 qpair failed and we were unable to recover it.
00:29:10.636 [2024-10-30 14:16:08.886399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.636 [2024-10-30 14:16:08.886433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.636 qpair failed and we were unable to recover it.
00:29:10.636 [2024-10-30 14:16:08.886676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.636 [2024-10-30 14:16:08.886704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.636 qpair failed and we were unable to recover it.
00:29:10.636 [2024-10-30 14:16:08.887096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.636 [2024-10-30 14:16:08.887128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420
00:29:10.636 qpair failed and we were unable to recover it.
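Editor's note: interleaved with the SPDK/DPDK startup banner above, the host keeps logging "connect() failed, errno = 111" against 10.0.0.2 port 4420. On Linux/glibc, errno 111 is ECONNREFUSED, i.e. nothing is accepting on that address and port yet, which is consistent with the target side not being up at that moment in this target-disconnect test. The standalone C sketch below reproduces that errno value; it assumes nothing is listening on 127.0.0.1 port 4420 on the machine where it runs.

/* Minimal standalone reproduction of the repeated "connect() failed,
 * errno = 111" records above. On Linux/glibc, 111 is ECONNREFUSED: a TCP
 * connect() to a port with no listener fails with that errno. Assumes
 * nothing is listening on 127.0.0.1:4420 where this runs. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        /* typically prints: connect() failed, errno = 111 (Connection refused) */

    close(fd);
    return 0;
}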
00:29:10.636 [2024-10-30 14:16:08.887555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.887584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.887853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.887884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.888116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.888145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.888512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.888544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.888917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.888949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.889318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.889347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.889630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.889659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.890028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.890058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.890348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.890377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.890761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.890791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 
00:29:10.636 [2024-10-30 14:16:08.891045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.891074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.891443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.891474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.891837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.891868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.892218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.892246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.892595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.892625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.893032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.893063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.893423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.893452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.893875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.893906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.894263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.894299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.894694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.894723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 
00:29:10.636 [2024-10-30 14:16:08.895124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.895154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.895529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.895557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.895817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.895847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.896271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.636 [2024-10-30 14:16:08.896299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.636 qpair failed and we were unable to recover it. 00:29:10.636 [2024-10-30 14:16:08.896562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.896598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.896864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.896895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.897263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.897291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.897664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.897695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.898138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.898171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.898420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.898449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 
00:29:10.637 [2024-10-30 14:16:08.898813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.898843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.899216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.899244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.899615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.899645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.900001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.900033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.900394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.900426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.900666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.900694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.901088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.901118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.901359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.901387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.901792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.901823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.901949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.901977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 
00:29:10.637 [2024-10-30 14:16:08.902295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.902325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.902696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.902724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.903123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.903153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.903497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.903526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.903897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.903929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.904204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.904234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.904605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.904634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.905046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.905077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.905480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.905509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 00:29:10.637 [2024-10-30 14:16:08.905804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.637 [2024-10-30 14:16:08.905834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.637 qpair failed and we were unable to recover it. 
00:29:10.912 [2024-10-30 14:16:08.906185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.906216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.906470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.906504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.906766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.906797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.907160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.907189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.907550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.907580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.908082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.908113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.908474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.908504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.908802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.908832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.909173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.909202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.909562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.909592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 
00:29:10.912 [2024-10-30 14:16:08.909950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.909980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.910386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.910415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.910767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.910797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.911171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.911200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.911567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.911598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.912040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.912080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.912431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.912461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.912822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.912852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.913226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.913255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 00:29:10.912 [2024-10-30 14:16:08.913523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.912 [2024-10-30 14:16:08.913553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.912 qpair failed and we were unable to recover it. 
00:29:10.913 [2024-10-30 14:16:08.913896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.913926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.914302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.914334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.914699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.914728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.915175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.915205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.915562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.915593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.915867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.915898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.916300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.916330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.916685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.916715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.917095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.917127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.917488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.917518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 
00:29:10.913 [2024-10-30 14:16:08.917782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.917812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.918193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.918222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.918479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.918508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.918788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.918818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.919208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.919237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.919601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.919631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.920060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.920091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.920446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.920476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.920828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.920859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.921133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.921161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 
00:29:10.913 [2024-10-30 14:16:08.921550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.921578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.921828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.921857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.922346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.922382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.922632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.922661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.922881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.922911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.923287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.923316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.923602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.923631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.923900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.923930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.924282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.924311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.924672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.924702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 
00:29:10.913 [2024-10-30 14:16:08.925070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.925101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.925459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.925490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.925883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.925913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.926263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.926293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.926697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.926726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.927125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.927158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.927506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.927538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.927909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.927940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.928320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.928351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.913 [2024-10-30 14:16:08.928715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.928769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 
00:29:10.913 [2024-10-30 14:16:08.929107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.913 [2024-10-30 14:16:08.929138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.913 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.929486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.929515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.929926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.929963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.930330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.930359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.930718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.930782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.931138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.931166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.931537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.931565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.931913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.931945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.932292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.932321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.932687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.932716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 
00:29:10.914 [2024-10-30 14:16:08.933145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.933176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.933543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.933573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.933825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.933855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.934245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.934273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.934522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.934552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.934810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.934839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.935111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.935139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.935474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.935505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.935873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.935905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.936247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.936275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 
00:29:10.914 [2024-10-30 14:16:08.936628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.936656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.937113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.937143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.937425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.937455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.937772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.937809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.938148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.938177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.938432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.938462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.938807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.938836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.939226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.939255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.939613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.939643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.940005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.940035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 
00:29:10.914 [2024-10-30 14:16:08.940380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.940410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.940738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.940780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.942718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.942801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.943121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.943157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.943402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.943431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.943778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.943811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.944183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.944213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.944464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.944498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.944842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.944872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.945248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.945277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 
00:29:10.914 [2024-10-30 14:16:08.945645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.945675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.914 qpair failed and we were unable to recover it. 00:29:10.914 [2024-10-30 14:16:08.946033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.914 [2024-10-30 14:16:08.946065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.946416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.946447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.946818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.946849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.947105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.947136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.947477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.947510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.947858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.947888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.948219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.948250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.948608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.948637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.948987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.949016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 
00:29:10.915 [2024-10-30 14:16:08.949246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.949276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.949512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.949541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.949888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.949919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.950267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.950299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.950627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.950657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.951082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.951114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.951467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.951496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.951857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.951888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.952234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.952265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.952610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.952642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 
00:29:10.915 [2024-10-30 14:16:08.953053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.953085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.953431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.953461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.953855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.953885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.954204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.954233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.954616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.954648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.955031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.955063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.955307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.955337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.955691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.955721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.956090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.956120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.956477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.956506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 
00:29:10.915 [2024-10-30 14:16:08.956867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.956898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.957259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.957288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.957630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.957660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.957920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.957955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.958319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.958348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.958730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.958772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.959140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.959169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.959428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.959457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.959823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.959855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.960197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.960226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 
00:29:10.915 [2024-10-30 14:16:08.960571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.960601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.960992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.961023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.915 [2024-10-30 14:16:08.961382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.915 [2024-10-30 14:16:08.961413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.915 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.961735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.961779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.962146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.962175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.962529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.962560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.962925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.962956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.963296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.963326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.963657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.963686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.964034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.964065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 
00:29:10.916 [2024-10-30 14:16:08.964420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.964451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.964820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.964858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.965226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.965256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.965619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.965649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.966062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.966092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.966438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.966468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.966833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.966864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.967239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.967268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.967413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.967446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.967856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.967887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 
00:29:10.916 [2024-10-30 14:16:08.968143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.968176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.968408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.968445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.968810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.968844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.969230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.969261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.969604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.969633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.969981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.970012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.970418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.970449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.970795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.970827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.971211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.971242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.971609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.971640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 
00:29:10.916 [2024-10-30 14:16:08.971984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.972017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.972228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.972258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.972397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.972426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.972820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.972851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.973230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.973260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.973624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.973655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.974000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.916 [2024-10-30 14:16:08.974032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.916 qpair failed and we were unable to recover it. 00:29:10.916 [2024-10-30 14:16:08.974378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.974408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.974774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.974805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.975142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.975171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 
00:29:10.917 [2024-10-30 14:16:08.975516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.975547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.975888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.975920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.976283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.976313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.976549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.976577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.976959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.976991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.977362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.977392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.977843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.977876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.978249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.978278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.978491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.978521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.978867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.978898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 
00:29:10.917 [2024-10-30 14:16:08.979236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.979267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.979498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.979531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.979937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.979972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.980208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.980242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.980595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.980626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.980971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.981003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.981252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.981285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.981638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.981668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.982054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.982085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.982342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.982375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 
00:29:10.917 [2024-10-30 14:16:08.982713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.982743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.983131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.983162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.983383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.983411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.983796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.983827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.984051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.984081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.984422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.984451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.984807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.984842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.985112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.985142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.985491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.985528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.985870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.985901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 
00:29:10.917 [2024-10-30 14:16:08.986250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.986280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.986607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.986638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.986977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.987007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.987332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.987364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.987723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.987768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.988119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.988149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.988508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.988538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.988892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.988925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.917 [2024-10-30 14:16:08.989237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.917 [2024-10-30 14:16:08.989267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.917 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.989318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.918 [2024-10-30 14:16:08.989532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.989563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 
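The NOTICE embedded in the entries above, app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4, is the SPDK application framework reporting how many CPU cores the just-started application instance may use. As a rough stand-alone illustration only (SPDK derives this figure through its own environment layer, not the call shown here), a Linux process can count the cores available to it like this:

/* Hypothetical sketch, not SPDK code: count the CPU cores the calling
 * process is allowed to run on, the kind of figure the
 * "Total cores available" NOTICE reports. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    if (sched_getaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_getaffinity");
        return 1;
    }
    printf("Total cores available: %d\n", CPU_COUNT(&set));
    return 0;
}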
00:29:10.918 [2024-10-30 14:16:08.989925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.989956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.990322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.990352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.990711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.990741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.991144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.991174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.991512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.991542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.991949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.991981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.992323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.992353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.992719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.992763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.993125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.993156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.993513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.993544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 
00:29:10.918 [2024-10-30 14:16:08.993907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.993939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.994298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.994328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.994696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.994726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.995099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.995130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.995504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.995532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.995889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.995919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.996280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.996310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.996673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.996702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.996949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.996981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.997366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.997395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 
00:29:10.918 [2024-10-30 14:16:08.997770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.997801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.998149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.998179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.998489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.998519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.998786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.998817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.999104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.999132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.999481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.999510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:08.999866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:08.999904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.000234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.000264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.000619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.000648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.001013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.001044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 
00:29:10.918 [2024-10-30 14:16:09.001311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.001341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.001681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.001711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.002106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.002136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.002491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.002522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.002881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.002913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.003233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.003263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.003652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.003682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.004041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.004071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.004426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.004454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.918 qpair failed and we were unable to recover it. 00:29:10.918 [2024-10-30 14:16:09.004826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.918 [2024-10-30 14:16:09.004857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 
00:29:10.919 [2024-10-30 14:16:09.005246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.005276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.005632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.005662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.006080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.006110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.006455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.006483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.006868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.006900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.007257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.007287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.007664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.007695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.008040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.008071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.008411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.008441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.008812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.008843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 
00:29:10.919 [2024-10-30 14:16:09.009203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.009233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.009590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.009618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.009976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.010009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.010392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.010423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.010782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.010813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.011177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.011205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.011565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.011594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.011926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.011956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.012314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.012342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.012727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.012769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 
00:29:10.919 [2024-10-30 14:16:09.013133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.013164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.013504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.013534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.013901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.013932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.014287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.014316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.014671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.014698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.015063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.015094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.015432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.015462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.015833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.015865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.016240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.016269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.016633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.016664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 
00:29:10.919 [2024-10-30 14:16:09.017026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.017056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.017407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.017436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.017792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.017821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.018199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.018229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.018493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.018525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.018883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.018914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.019265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.019295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.019670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.019701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.020064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.020094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 00:29:10.919 [2024-10-30 14:16:09.020438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.020468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.919 qpair failed and we were unable to recover it. 
00:29:10.919 [2024-10-30 14:16:09.020824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.919 [2024-10-30 14:16:09.020856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.021119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.021149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.021498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.021528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.021867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.021899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.022269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.022299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.022661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.022689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.023064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.023094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.023469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.023498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.023870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.023900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.024254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.024284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 
00:29:10.920 [2024-10-30 14:16:09.024647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.024678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.025027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.025059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.025400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.025429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.025782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.025812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.026153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.026189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.026557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.026586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.026977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.027009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.027255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.027284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.027652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.027683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.028025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.028055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 
00:29:10.920 [2024-10-30 14:16:09.028424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.028453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.028807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.028838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.029216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.029246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.029589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.029619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.029871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.029905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.030245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.030275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.030602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.030630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.030973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.031004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.031378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.031407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.031775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.031805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 
00:29:10.920 [2024-10-30 14:16:09.032040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.032071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.032437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.032466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.032809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.032847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.033212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.033243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.033609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.033638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.034008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.034037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.034397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.034426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.034788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.034819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.035206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.035236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.035499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.035527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 
00:29:10.920 [2024-10-30 14:16:09.035886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.920 [2024-10-30 14:16:09.035916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.920 qpair failed and we were unable to recover it. 00:29:10.920 [2024-10-30 14:16:09.036177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.036209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.036574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.036603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.036983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.037014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.037366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.037396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.037769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.037799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.038158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.038186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.038557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.038586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.038758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.038788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.039025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.039054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 
00:29:10.921 [2024-10-30 14:16:09.039453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.039485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.039837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.039869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.040117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.040148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.040506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.040535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.040897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.040926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.041299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.041331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.041742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.041796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.042141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.042172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.042539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.042569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.042939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.042969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.043227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:10.921 [2024-10-30 14:16:09.043272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.921 [2024-10-30 14:16:09.043281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.921 [2024-10-30 14:16:09.043289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.921 [2024-10-30 14:16:09.043295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.921 [2024-10-30 14:16:09.043348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.043377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.043736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.043775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.044023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.044054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.044439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.044468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.044827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.044856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.045219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.045248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.045619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.045648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.045530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:10.921 [2024-10-30 14:16:09.045698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:10.921 [2024-10-30 14:16:09.045907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:10.921 [2024-10-30 14:16:09.045908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:10.921 [2024-10-30 14:16:09.046069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.046098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.046458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.046488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.046740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.046783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.047125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.047154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.047559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.047588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.047821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.047854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.048242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.921 [2024-10-30 14:16:09.048271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.921 qpair failed and we were unable to recover it. 00:29:10.921 [2024-10-30 14:16:09.048530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.048560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.048904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.048936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 
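Editor's note: the reactor_run notices above record SPDK starting one reactor (polling thread) per configured core (cores 4-7 in this run); because several reactors log to the same console concurrently, their notices can interleave with the connection-error messages. As a loose, hypothetical illustration only (not SPDK source, core numbers are placeholders), the sketch below pins one plain pthread to each core with pthread_setaffinity_np, which is the general "one polling thread per core" pattern these notices describe.

/* Hypothetical sketch (not SPDK code): one thread per core, pinned with
 * pthread_setaffinity_np, roughly the "one reactor per core" pattern
 * reported by the reactor_run notices above. Compile with -pthread. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *reactor_loop(void *arg)
{
    long core = (long)arg;
    printf("Reactor started on core %ld\n", core);
    /* A real reactor would spin here, polling its registered pollers. */
    return NULL;
}

int main(void)
{
    long cores[] = {4, 5, 6, 7};   /* placeholder core list */
    pthread_t threads[4];

    for (int i = 0; i < 4; i++) {
        pthread_create(&threads[i], NULL, reactor_loop, (void *)cores[i]);

        /* Pin the new thread to its core. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cores[i], &set);
        pthread_setaffinity_np(threads[i], sizeof(set), &set);
    }
    for (int i = 0; i < 4; i++) {
        pthread_join(threads[i], NULL);
    }
    return 0;
}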
00:29:10.922 [2024-10-30 14:16:09.049319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.049348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.049612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.049640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.049878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.049915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.050201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.050231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.050595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.050626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.050998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.051028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.051397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.051426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.051806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.051837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.052202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.052231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.052456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.052484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 
00:29:10.922 [2024-10-30 14:16:09.052824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.052855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.053181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.053212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.053577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.053606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.053992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.054023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.054382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.054410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.054764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.054794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.055159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.055187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.055547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.055578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.055929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.055959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.056323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.056353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 
00:29:10.922 [2024-10-30 14:16:09.056716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.056745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.057127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.057157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.057545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.057574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.057912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.057943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.058321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.058350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.058697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.058726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.059115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.059145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.059503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.059532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.059908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.059939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.060299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.060328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 
00:29:10.922 [2024-10-30 14:16:09.060687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.060724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.061116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.061146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.061378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.061407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.061772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.061803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.062156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.062184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.062576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.062606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.062993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.063025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.063398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.063427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.063788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.922 [2024-10-30 14:16:09.063819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.922 qpair failed and we were unable to recover it. 00:29:10.922 [2024-10-30 14:16:09.064176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.064204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 
00:29:10.923 [2024-10-30 14:16:09.064565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.064595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.064950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.064980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.065254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.065282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.065643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.065672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.065925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.065955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.066327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.066356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.066709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.066740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.067082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.067113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.067478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.067507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.067810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.067842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 
00:29:10.923 [2024-10-30 14:16:09.068203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.068233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.068605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.068635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.068978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.069010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.069359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.069388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.069760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.069791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.070157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.070187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.070552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.070580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.070972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.071001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.071371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.071403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.071769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.071801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 
00:29:10.923 [2024-10-30 14:16:09.072031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.072060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.072436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.072465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.072824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.072854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.073215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.073244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.073609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.073640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.074002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.074031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.074410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.074439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.074893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.074927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.075306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.075336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.075701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.075731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 
00:29:10.923 [2024-10-30 14:16:09.076063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.076093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.076473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.076503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.076864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.076896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.077258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.077288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.077647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.077677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.078041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.078072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.078331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.078361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.078704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.078736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.079109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.079139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 00:29:10.923 [2024-10-30 14:16:09.079485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.923 [2024-10-30 14:16:09.079515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.923 qpair failed and we were unable to recover it. 
00:29:10.923 [2024-10-30 14:16:09.079870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.079901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.080250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.080280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.080653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.080684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.081076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.081108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.081473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.081504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.081783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.081816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.082069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.082101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.082470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.082500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.082855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.082887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.083256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.083287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 
00:29:10.924 [2024-10-30 14:16:09.083696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.083725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.084083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.084112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.084478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.084510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.084775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.084806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.085165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.085195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.085410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.085439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.085813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.085844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.086227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.086259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.086603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.086640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.086871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.086903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 
00:29:10.924 [2024-10-30 14:16:09.087273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.087304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.087683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.087714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.088076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.088107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.088478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.088508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.088866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.088897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.089247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.089277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.089414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.089445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.089785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.089817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.090208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.090239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.090590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.090621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 
00:29:10.924 [2024-10-30 14:16:09.090805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.090835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.091071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.091100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.091463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.091494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.091874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.091904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.092269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.092298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.092667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.092696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.093071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.093102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.093480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.093509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.093859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.093892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.094232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.094261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 
00:29:10.924 [2024-10-30 14:16:09.094613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.924 [2024-10-30 14:16:09.094643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.924 qpair failed and we were unable to recover it. 00:29:10.924 [2024-10-30 14:16:09.095012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.095044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.095369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.095397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.095770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.095802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.096029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.096058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.096416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.096447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.096823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.096855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.097215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.097243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.097600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.097630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.098013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.098044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 
00:29:10.925 [2024-10-30 14:16:09.098417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.098447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.098804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.098835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.099048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.099077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.099442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.099471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.099844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.099876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.100252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.100281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.100637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.100669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.100947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.100978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.101364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.101392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.101635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.101670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 
00:29:10.925 [2024-10-30 14:16:09.102037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.102068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.102306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.102335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.102695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.102725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.103095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.103126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.103494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.103524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.103891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.103923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.104286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.104315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.104550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.104580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.105000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.105032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.105372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.105401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 
00:29:10.925 [2024-10-30 14:16:09.105671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.105701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.106070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.106102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.106465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.106495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.106708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.106738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.107025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.107059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.107301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.107331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.925 [2024-10-30 14:16:09.107673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.925 [2024-10-30 14:16:09.107703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.925 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.108091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.108123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.108504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.108534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.108870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.108901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 
00:29:10.926 [2024-10-30 14:16:09.109142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.109170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.109452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.109480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.109814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.109845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.109985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.110015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.110236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.110266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.110501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.110530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.110909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.110947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.111296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.111324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.111700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.111730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.111991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.112022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 
00:29:10.926 [2024-10-30 14:16:09.112238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.112267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.112641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.112669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.113062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.113093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.113350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.113382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.113762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.113793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.114047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.114076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.114457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.114486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.114843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.114874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.115264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.115292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.115673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.115703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 
00:29:10.926 [2024-10-30 14:16:09.116123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.116153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.116507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.116538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.116788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.116820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.117159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.117187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.117514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.117545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.117888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.117918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.118280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.118309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.118667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.118698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.118968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.119002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.119346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.119374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 
00:29:10.926 [2024-10-30 14:16:09.119489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.119519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.119870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.119901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.120135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.120164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.120545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.120575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.120996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.121028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.121262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.121293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.121655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.121684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.122101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.926 [2024-10-30 14:16:09.122132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.926 qpair failed and we were unable to recover it. 00:29:10.926 [2024-10-30 14:16:09.122464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.122492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.122731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.122772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 
00:29:10.927 [2024-10-30 14:16:09.123176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.123206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.123446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.123475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.123792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.123822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.124152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.124181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.124399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.124427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.124779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.124810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.125168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.125205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.125407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.125444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.125786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.125817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.126085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.126113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 
00:29:10.927 [2024-10-30 14:16:09.126529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.126557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.126791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.126819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.127193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.127222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.127589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.127619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.127850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.127881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.128252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.128281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.128507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.128535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.128868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.128899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.129148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.129179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.129441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.129470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 
00:29:10.927 [2024-10-30 14:16:09.129809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.129840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.130184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.130215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.130389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.130418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.130633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.130661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.131037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.131067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.131295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.131326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.131679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.131709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.131937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.131966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.132222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.132252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.132477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.132506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 
00:29:10.927 [2024-10-30 14:16:09.132732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.132775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.133182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.133211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.133577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.133605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.133835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.133865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.134126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.134164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.134399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.134429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.134811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.134842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.135194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.135224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.927 [2024-10-30 14:16:09.135614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.927 [2024-10-30 14:16:09.135642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.927 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.135909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.135938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 
00:29:10.928 [2024-10-30 14:16:09.136278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.136306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.136675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.136706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.137080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.137111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.137482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.137511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.137888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.137919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.138139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.138170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.138538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.138569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.138902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.138931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.139310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.139340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.139697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.139726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 
00:29:10.928 [2024-10-30 14:16:09.140118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.140149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.140486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.140514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.140769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.140800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.141029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.141058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.141424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.141453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.141807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.141838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.142187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.142216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.142569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.142597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.142987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.143017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.143280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.143309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 
00:29:10.928 [2024-10-30 14:16:09.143656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.143685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.144054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.144084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.144443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.144472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.144822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.144852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.145124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.145151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.145487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.145516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.145877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.145909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.146276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.146305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.146671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.146699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.147055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.147084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 
00:29:10.928 [2024-10-30 14:16:09.147293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.147321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.147701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.147731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.147931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.147960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.148210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.148242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.148571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.148600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.148966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.149005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.149362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.149391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.149771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.149803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.150146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.150173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 00:29:10.928 [2024-10-30 14:16:09.150560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.928 [2024-10-30 14:16:09.150589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.928 qpair failed and we were unable to recover it. 
00:29:10.929 [2024-10-30 14:16:09.150953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.150984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.151351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.151379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.151724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.151763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.152119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.152148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.152510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.152541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.152903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.152932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.153309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.153337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.153713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.153742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.154139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.154169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.154526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.154557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 
00:29:10.929 [2024-10-30 14:16:09.154916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.154946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.155215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.155243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.155597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.155627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.156011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.156042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.156409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.156438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.156810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.156840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.157213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.157242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.157605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.157633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.158002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.158034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.158396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.158425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 
00:29:10.929 [2024-10-30 14:16:09.158794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.158825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.159188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.159217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.159578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.159614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.159971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.160001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.160337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.160366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.160727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.160767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.161144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.161173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.161532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.161560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.161931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.161962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.162221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.162250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 
00:29:10.929 [2024-10-30 14:16:09.162490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.162518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.162742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.162786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.163153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.163182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.163541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.163571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.163931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.163961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.164283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.929 [2024-10-30 14:16:09.164313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.929 qpair failed and we were unable to recover it. 00:29:10.929 [2024-10-30 14:16:09.164679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.164709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.165084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.165113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.165475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.165507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.165763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.165795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 
00:29:10.930 [2024-10-30 14:16:09.166136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.166165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.166535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.166565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.166803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.166834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.167101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.167132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.167501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.167530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.167895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.167927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.168303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.168333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.168696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.168726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.169109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.169138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.169504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.169534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 
00:29:10.930 [2024-10-30 14:16:09.169914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.169944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.170304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.170334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.170685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.170714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.171077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.171107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.171477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.171508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.171852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.171883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.172258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.172289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.172643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.172671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.173043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.173075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.173416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.173445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 
00:29:10.930 [2024-10-30 14:16:09.173833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.173863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.174239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.174268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.174629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.174657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.175020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.175057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.175439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.175469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.175825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.175856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.176226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.176261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.176625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.176654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.177012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.177042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.177390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.177420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 
00:29:10.930 [2024-10-30 14:16:09.177789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.177819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.178211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.178239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.178588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.178618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.178967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.178997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.179371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.179400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.179744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.179788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.180232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.930 [2024-10-30 14:16:09.180261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.930 qpair failed and we were unable to recover it. 00:29:10.930 [2024-10-30 14:16:09.180631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.180661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.180894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.180925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.181146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.181174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 
00:29:10.931 [2024-10-30 14:16:09.181552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.181582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.181695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.181727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.181971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.182002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.182360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.182389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.182624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.182653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.183030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.183061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.183296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.183325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.183607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.183636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.183861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.183890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.184137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.184166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 
00:29:10.931 [2024-10-30 14:16:09.184411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.184441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.184633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.184664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.185063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.185093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.185328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.185356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.185719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.185761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.186102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.186132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.186359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.186387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.186646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.186675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.186918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.186950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.187190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.187219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 
00:29:10.931 [2024-10-30 14:16:09.187477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.187510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.187877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.187908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.188261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.188291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.188625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.188654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.189031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.189062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.189346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.189375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.189617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.189646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.189914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.189944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.190165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.190195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.190441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.190473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 
00:29:10.931 [2024-10-30 14:16:09.190705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.190733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.191130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.191160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.191532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.191561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.191786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.191816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.192068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.192097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.192242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.192271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.192512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.192544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.192968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.931 [2024-10-30 14:16:09.192999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.931 qpair failed and we were unable to recover it. 00:29:10.931 [2024-10-30 14:16:09.193247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.193276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.193522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.193551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 
00:29:10.932 [2024-10-30 14:16:09.193878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.193907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.194146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.194176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.194552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.194581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.194936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.194967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.195317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.195347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.195718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.195759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.196076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.196106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.196428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.196459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.196833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.196864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.197238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.197270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 
00:29:10.932 [2024-10-30 14:16:09.197485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.197514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.197775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.197811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.198033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.198063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.198434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.198463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:10.932 [2024-10-30 14:16:09.198824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.932 [2024-10-30 14:16:09.198855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:10.932 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.199227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.199258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.199630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.199661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.200035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.200065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.200404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.200433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.200658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.200686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 
00:29:11.206 [2024-10-30 14:16:09.200946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.200977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.201303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.201331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.201708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.201738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.202109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.202139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.202514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.202543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.202891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.202923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.203312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.203341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.203696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.203726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.203989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.204018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.204249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.204279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 
00:29:11.206 [2024-10-30 14:16:09.204526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.204556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.204956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.204986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.205331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.205361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.205648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.205678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.206006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.206038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.206412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.206 [2024-10-30 14:16:09.206440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.206 qpair failed and we were unable to recover it. 00:29:11.206 [2024-10-30 14:16:09.206652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.206681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.207053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.207084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.207434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.207464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.207849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.207879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 
00:29:11.207 [2024-10-30 14:16:09.208262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.208292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.208646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.208675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.209055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.209085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.209305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.209335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.209681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.209710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.210089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.210120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.210504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.210535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.210879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.210918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.211297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.211327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.211686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.211715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 
00:29:11.207 [2024-10-30 14:16:09.211885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.211916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.212150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.212179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.212531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.212561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.212797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.212827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.213236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.213265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.213612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.213642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.213981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.214011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.214352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.214381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.214599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.214630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.215003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.215034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 
00:29:11.207 [2024-10-30 14:16:09.215396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.215426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.215794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.215824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.216045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.216073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.216298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.216326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.216717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.216756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.217131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.217162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.217397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.217426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.217795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.217825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.218019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.218047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.218390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.218420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 
00:29:11.207 [2024-10-30 14:16:09.218790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.218821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.219150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.219188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.219516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.219548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.219895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.219924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.220347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.220377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.220604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.220632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.207 [2024-10-30 14:16:09.220871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.207 [2024-10-30 14:16:09.220900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.207 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.221295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.221324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.221520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.221551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.221819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.221855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 
00:29:11.208 [2024-10-30 14:16:09.222227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.222256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.222466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.222498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.222873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.222903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.223267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.223296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.223668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.223698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.223943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.223973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.224316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.224346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.224555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.224585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.224818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.224848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.225203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.225233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 
00:29:11.208 [2024-10-30 14:16:09.225449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.225478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.225822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.225861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.226242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.226272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.226490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.226519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.226790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.226822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.227075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.227103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.227339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.227368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.227724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.227766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.228162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.228191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.228533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.228563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 
00:29:11.208 [2024-10-30 14:16:09.228798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.228829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.229084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.229115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.229352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.229381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.229806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.229836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.230065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.230093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.230480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.230509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.230864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.230896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.231130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.231160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.231530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.231558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.231802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.231833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 
00:29:11.208 [2024-10-30 14:16:09.232080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.232108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.232485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.232514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.232767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.208 [2024-10-30 14:16:09.232797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.208 qpair failed and we were unable to recover it. 00:29:11.208 [2024-10-30 14:16:09.233177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.233207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.233431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.233459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.233829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.233859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.234236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.234265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.234500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.234528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.234907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.234937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.235281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.235311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 
00:29:11.209 [2024-10-30 14:16:09.235547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.235582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.235934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.235963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.236243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.236272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.236503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.236532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.236912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.236941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.237310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.237340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.237549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.237579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.237823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.237854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.237992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.238020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.238254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.238282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 
00:29:11.209 [2024-10-30 14:16:09.238615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.238643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.238985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.239016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.239385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.239413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.239793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.239824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.240209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.240238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.240602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.240631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.240981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.241011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.241233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.241261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.241615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.241645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.241990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.242022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 
00:29:11.209 [2024-10-30 14:16:09.242365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.242394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.242789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.242820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.242917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.242944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.243187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.243215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.243506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.243536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.243789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.243819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.244164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.244193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.244411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.244449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.244795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.244826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.245060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.245088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 
00:29:11.209 [2024-10-30 14:16:09.245291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.245320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.245671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.245700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.246120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.246150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.209 [2024-10-30 14:16:09.246506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.209 [2024-10-30 14:16:09.246537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.209 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.246871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.246902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.247265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.247294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.247662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.247690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.247814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.247851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.248184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.248213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.248572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.248601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 
00:29:11.210 [2024-10-30 14:16:09.248854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.248884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.249264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.249294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.249636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.249666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.249913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.249943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.250151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.250181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.250546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.250576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.250903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.250934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.251298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.251328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.251559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.251587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.251922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.251952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 
00:29:11.210 [2024-10-30 14:16:09.252274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.252304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.252684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.252715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.252932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.252962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.253337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.253367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.253583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.253612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.253871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.253902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.254279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.254308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.254403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.254431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.254818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.254847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.255087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.255117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 
00:29:11.210 [2024-10-30 14:16:09.255467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.255497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.255874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.255905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.256262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.256291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.256677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.256706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.257073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.257102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.257460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.257488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.210 qpair failed and we were unable to recover it. 00:29:11.210 [2024-10-30 14:16:09.257893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.210 [2024-10-30 14:16:09.257925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.258261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.258289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.258627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.258662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.259040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.259070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 
00:29:11.211 [2024-10-30 14:16:09.259425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.259453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.259814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.259845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.260217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.260247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.260583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.260612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.260998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.261028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.261274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.261302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.261650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.261679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.262073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.262103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.262463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.262492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.262821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.262852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 
00:29:11.211 [2024-10-30 14:16:09.263288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.263317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.263627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.263655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.264014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.264044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.264411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.264440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.264706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.264734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.265146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.265175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.265529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.265558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.265768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.265796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.266170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.266200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.266572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.266604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 
00:29:11.211 [2024-10-30 14:16:09.266880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.266911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.267252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.267281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.267627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.267656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.268003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.268033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.268410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.268438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.268805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.268842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.269209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.269238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.269468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.269496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.269856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.269885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.270271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.270300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 
00:29:11.211 [2024-10-30 14:16:09.270669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.270697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.271116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.271146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.271399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.271428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.271674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.271703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.211 [2024-10-30 14:16:09.272092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.211 [2024-10-30 14:16:09.272122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.211 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.272481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.272510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.272873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.272903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.273143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.273172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.273414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.273442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.273810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.273841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 
00:29:11.212 [2024-10-30 14:16:09.274116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.274144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.274492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.274521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.274892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.274922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.275269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.275297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.275662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.275693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.275945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.275975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.276362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.276391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.276773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.276804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.277168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.277197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.277567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.277597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 
00:29:11.212 [2024-10-30 14:16:09.277987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.278018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.278370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.278399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.278775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.278805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.279169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.279197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.279434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.279462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.279847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.279877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.280247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.280276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.280627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.280655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.280996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.281027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.281279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.281308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 
00:29:11.212 [2024-10-30 14:16:09.281688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.281718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.282109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.282138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.282514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.282544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.282912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.282942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.283292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.283321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.283697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.283725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.284087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.284122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.284329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.284358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.284730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.284780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.285107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.285137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 
00:29:11.212 [2024-10-30 14:16:09.285492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.285521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.285901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.285930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.286293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.286322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.286693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.286725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.287105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.287135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.212 [2024-10-30 14:16:09.287479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.212 [2024-10-30 14:16:09.287509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.212 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.287870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.287900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.288242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.288272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.288663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.288692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.289059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.289089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 
00:29:11.213 [2024-10-30 14:16:09.289479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.289511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.289884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.289916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.290276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.290305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.290681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.290710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.291079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.291108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.291473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.291505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.291717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.291759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.292104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.292133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.292491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.292519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.292878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.292909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 
00:29:11.213 [2024-10-30 14:16:09.293139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.293167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.293532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.293561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.293923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.293954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.294232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.294260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.294609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.294639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.294990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.295020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.295398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.295426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.295789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.295820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.296197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.296227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.296599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.296628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 
00:29:11.213 [2024-10-30 14:16:09.296995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.297026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.297409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.297438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.297819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.297848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.298232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.298261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.298640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.298669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.299050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.299081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.299439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.299467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.299696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.299726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.300004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.300035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.300133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.300162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 
00:29:11.213 [2024-10-30 14:16:09.300521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.300552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.300773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.300803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.301186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.301215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.301429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.301459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.301801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.301830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.302064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.302093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.213 [2024-10-30 14:16:09.302461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.213 [2024-10-30 14:16:09.302491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.213 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.302846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.302877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.303234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.303264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.303606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.303634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 
00:29:11.214 [2024-10-30 14:16:09.303985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.304015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.304392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.304422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.304703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.304733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.304977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.305007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.305249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.305278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.305642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.305671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.305882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.305911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.306152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.306181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.306401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.306429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.306794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.306824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 
00:29:11.214 [2024-10-30 14:16:09.307072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.307100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.307390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.307420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.307643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.307672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.307891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.307920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.308277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.308313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.308660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.308689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.308916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.308946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.309304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.309333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.309560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.309591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.309798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.309829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 
00:29:11.214 [2024-10-30 14:16:09.310063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.310091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.310445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.310477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.310832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.310863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.311096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.311124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.311454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.311484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.311759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.311793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.312164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.312193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.312541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.312570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.312789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.312820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.313071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.313100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 
00:29:11.214 [2024-10-30 14:16:09.313351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.313380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.313759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.313791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.314137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.314167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.314391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.314419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.314632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.314661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.314899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.314928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.315192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.315222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.214 [2024-10-30 14:16:09.315570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.214 [2024-10-30 14:16:09.315599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.214 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.315954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.315984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.316218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.316249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 
00:29:11.215 [2024-10-30 14:16:09.316590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.316619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.316976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.317006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.317366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.317395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.317607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.317635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.318003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.318033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.318376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.318406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.318618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.318648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.318859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.318889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.319273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.319302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.319458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.319486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 
00:29:11.215 [2024-10-30 14:16:09.319758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.319787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.320002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.320031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.320405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.320435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.320777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.320808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.321034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.321065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.321472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.321502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.321726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.321768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.322011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.322040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.322286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.322314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.322547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.322577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 
00:29:11.215 [2024-10-30 14:16:09.322934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.322963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.323351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.323380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.323742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.323794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.324054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.324082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.324357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.324386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.324761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.324792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.325033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.325061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.325449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.325480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.325849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.325878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.326098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.326128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 
00:29:11.215 [2024-10-30 14:16:09.326229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.326259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.326617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.326646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.215 qpair failed and we were unable to recover it. 00:29:11.215 [2024-10-30 14:16:09.326999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.215 [2024-10-30 14:16:09.327030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.327251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.327281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.327631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.327662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.328037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.328068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.328318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.328348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.328564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.328595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.328887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.328916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.329291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.329320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 
00:29:11.216 [2024-10-30 14:16:09.329673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.329703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.330063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.330094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.330501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.330536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.330881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.330911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.331259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.331287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.331505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.331535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.331871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.331900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.332278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.332308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.332514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.332542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.332961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.332992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 
00:29:11.216 [2024-10-30 14:16:09.333235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.333263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.333437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.333466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.333829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.333859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.334236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.334267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.334490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.334519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.334894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.334925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.335340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.335370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.335717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.335757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.335978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.336007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.336223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.336252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 
00:29:11.216 [2024-10-30 14:16:09.336546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.336577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.336926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.336957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.337332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.337363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.337741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.337785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.338000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.338029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.338406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.338435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.338782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.338814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.339092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.339123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.339479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.339514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.339939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.339969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 
00:29:11.216 [2024-10-30 14:16:09.340338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.340368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.340735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.340779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.216 qpair failed and we were unable to recover it. 00:29:11.216 [2024-10-30 14:16:09.341011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.216 [2024-10-30 14:16:09.341044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.341436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.341465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.341825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.341854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.342101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.342129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.342509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.342540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.342639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.342668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be120 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 
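The failures above all have the same shape: SPDK's posix sock layer calls connect() toward 10.0.0.2:4420, the kernel returns errno = 111 (ECONNREFUSED on Linux) because nothing is accepting on that address at that moment, and nvme_tcp_qpair_connect_sock() reports the qpair as unrecoverable. As a reading aid, here is a minimal sketch of that syscall-level behaviour using a plain POSIX socket; it is not SPDK code, and the address and port are simply copied from the log lines above.

/*
 * Illustrative sketch only (not SPDK code): reproduce the errno = 111
 * (ECONNREFUSED) reported by posix_sock_create() when no listener is
 * present on the target address/port used by this test.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return 1;

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111 on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}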
00:29:11.217 [2024-10-30 14:16:09.342973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bbf30 is same with the state(6) to be set
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Write completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 Read completed with error (sct=0, sc=8)
00:29:11.217 starting I/O failed
00:29:11.217 [2024-10-30 14:16:09.343852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.217 [2024-10-30 14:16:09.344209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.217 [2024-10-30 14:16:09.344266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420
00:29:11.217 qpair failed and we were unable to recover it.
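The block above is the one place in this stretch of the log where the failure changes character: every read and write still queued on the qpair completes with status (sct=0, sc=8), which in the NVMe generic status table corresponds to a command aborted because its submission queue was deleted, and spdk_nvme_qpair_process_completions() then reports CQ transport error -6, i.e. -ENXIO, the "No such device or address" shown in the message, on qpair id 2. After that, the reconnect attempts continue against a new tqpair (0x7f9dac000b90). The small sketch below only illustrates how to read the (sct, sc) pair; decode_status() is an invented helper for this note, not an SPDK API, and it covers only the one error status that appears here.

#include <stdio.h>

/* Invented helper for illustration: map the (sct, sc) pairs seen in this
 * log to the wording used by the NVMe base specification. */
static const char *decode_status(int sct, int sc)
{
    if (sct == 0 && sc == 0x0)
        return "Successful Completion";
    if (sct == 0 && sc == 0x8)
        return "Command Aborted due to SQ Deletion";
    return "other status (see the NVMe spec status code tables)";
}

int main(void)
{
    /* Matches "Read completed with error (sct=0, sc=8)" above. */
    printf("sct=0, sc=8 -> %s\n", decode_status(0, 8));
    return 0;
}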
00:29:11.217 [2024-10-30 14:16:09.344653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.344684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.345179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.345284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.345730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.345789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.346150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.346181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.346538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.346568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.347028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.347131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.347543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.347580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.347960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.347993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.348347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.348377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.348762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.348794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 
00:29:11.217 [2024-10-30 14:16:09.349135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.349167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.349517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.349546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.349899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.349930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.350300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.350330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.350703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.350739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.351087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.351117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.351480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.351510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.351889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.351920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.352316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.352347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.352610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.352638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 
00:29:11.217 [2024-10-30 14:16:09.352863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.352893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.353271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.353301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.217 [2024-10-30 14:16:09.353668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.217 [2024-10-30 14:16:09.353698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.217 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.354074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.354112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.354468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.354497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.354838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.354870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.355241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.355272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.355625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.355653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.356044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.356075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.356434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.356463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 
00:29:11.218 [2024-10-30 14:16:09.356814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.356848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.357224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.357253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.357620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.357650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.357994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.358024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.358404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.358434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.358801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.358834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.359049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.359078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.359450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.359480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.359760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.359797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.360059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.360088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 
00:29:11.218 [2024-10-30 14:16:09.360463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.360493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.360859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.360891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.361279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.361309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.361658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.361689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.362094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.362126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.362483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.362513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.362770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.362805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.363165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.363196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.363542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.363571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.363939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.363971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 
00:29:11.218 [2024-10-30 14:16:09.364324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.364356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.364726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.364765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.365122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.365152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.365535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.365565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.365952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.365984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.366323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.366352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.366710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.366740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.367107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.367139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.367494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.367524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.367870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.367901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 
00:29:11.218 [2024-10-30 14:16:09.368131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.368163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.368392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.368422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.368676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.218 [2024-10-30 14:16:09.368705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.218 qpair failed and we were unable to recover it. 00:29:11.218 [2024-10-30 14:16:09.368924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.368962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.369323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.369353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.369730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.369767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.370187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.370217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.370552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.370581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.370974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.371007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.371365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.371396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 
00:29:11.219 [2024-10-30 14:16:09.371646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.371680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.372053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.372082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.372362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.372391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.372735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.372774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.373153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.373183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.373564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.373593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.373957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.373987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.374369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.374398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.374784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.374816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.375206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.375235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 
00:29:11.219 [2024-10-30 14:16:09.375680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.375709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.376007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.376036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.376387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.376417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.376783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.376815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.377138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.377167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.377527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.377556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.377932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.377962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.378238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.378268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.378523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.378551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.378789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.378823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 
00:29:11.219 [2024-10-30 14:16:09.379055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.379086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.379445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.379475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.379844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.379875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.380266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.380296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.380661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.380690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.381091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.381121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.381456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.381487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.381860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.381892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.382275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.219 [2024-10-30 14:16:09.382306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.219 qpair failed and we were unable to recover it. 00:29:11.219 [2024-10-30 14:16:09.382757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.382788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 
00:29:11.220 [2024-10-30 14:16:09.383150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.383179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.383537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.383567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.383947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.383979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.384317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.384352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.384619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.384649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.385063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.385093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.385434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.385466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.385851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.385883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.386235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.386266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.386523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.386553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 
00:29:11.220 [2024-10-30 14:16:09.386768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.386797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.387017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.387056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.387437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.387466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.387836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.387866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.388175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.388206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.388543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.388572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.388924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.388954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.389326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.389356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.389795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.389826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.390193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.390222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 
00:29:11.220 [2024-10-30 14:16:09.390589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.390619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.390958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.390989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.391338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.391368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.391742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.391782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.392011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.392040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.392376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.392407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.392772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.392804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.393042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.393070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.393215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.393244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.393605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.393634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 
00:29:11.220 [2024-10-30 14:16:09.393888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.393923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.394214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.394244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.394468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.394497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.394728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.394772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.395064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.395094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.395486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.395517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.395849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.395882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.396259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.396289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.396618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.396648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 00:29:11.220 [2024-10-30 14:16:09.396891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.220 [2024-10-30 14:16:09.396921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.220 qpair failed and we were unable to recover it. 
00:29:11.221 [2024-10-30 14:16:09.397217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.397247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.397611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.397643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.397867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.397898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.398278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.398315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.398534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.398563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.398898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.398928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.399142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.399171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.399420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.399450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.399819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.399852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.400104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.400133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 
00:29:11.221 [2024-10-30 14:16:09.400513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.400542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.400923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.400953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.401316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.401345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.401489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.401517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.401872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.401903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.402280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.402311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.402761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.402791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.403166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.403196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.403550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.403581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.403826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.403858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 
00:29:11.221 [2024-10-30 14:16:09.404233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.404263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.404490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.404519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.404806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.404836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.405201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.405230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.405477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.405506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.405871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.405900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.406110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.406138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.406502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.406534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.406881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.406911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.407307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.407336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 
00:29:11.221 [2024-10-30 14:16:09.407687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.407718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.408090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.408120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.408341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.408370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.408724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.408765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.409085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.409114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.409476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.409505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.409790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.409819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.410058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.410087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.410320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.410349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 00:29:11.221 [2024-10-30 14:16:09.410689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.410718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.221 qpair failed and we were unable to recover it. 
00:29:11.221 [2024-10-30 14:16:09.411048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.221 [2024-10-30 14:16:09.411078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.411437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.411467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.411784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.411815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.412136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.412173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.412515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.412544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.412903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.412935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.413372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.413401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.413767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.413796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.414163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.414192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.414423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.414452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 
00:29:11.222 [2024-10-30 14:16:09.414803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.414834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.415065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.415093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.415467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.415497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.415857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.415888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.416275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.416304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.416643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.416673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.417009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.417038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.417395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.417425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.417786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.417817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.417998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.418027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 
00:29:11.222 [2024-10-30 14:16:09.418299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.418327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.418686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.418715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.419121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.419151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.419505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.419535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.419661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.419689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.420056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.420087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.420458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.420487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.420853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.420882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.421271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.421300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.421677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.421707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 
00:29:11.222 [2024-10-30 14:16:09.421968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.421998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.422316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.422346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.422553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.422581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.422818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.422847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.423055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.423084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.423447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.423476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.423848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.423877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.424239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.424267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.424645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.424675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.222 [2024-10-30 14:16:09.425052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.425083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 
00:29:11.222 [2024-10-30 14:16:09.425459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.222 [2024-10-30 14:16:09.425488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.222 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.425837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.425867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.426193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.426223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.426610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.426641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.426884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.426914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.427163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.427193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.427557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.427585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.427923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.427953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.428322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.428350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.428712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.428742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 
00:29:11.223 [2024-10-30 14:16:09.429106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.429136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.429495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.429523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.429905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.429936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.430299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.430328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.430690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.430720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.431106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.431136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.431507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.431536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.431908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.431938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.432299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.432328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.432686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.432715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 
00:29:11.223 [2024-10-30 14:16:09.433087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.433118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.433481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.433513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.433858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.433890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.434155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.434186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.434571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.434600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.434959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.434989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.435359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.435388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.435757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.435788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.436136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.436165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.436407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.436436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 
00:29:11.223 [2024-10-30 14:16:09.436602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.436639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.437013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.437043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.437258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.437286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.437643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.437674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.438060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.438091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.438459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.438487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.438868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.223 [2024-10-30 14:16:09.438898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.223 qpair failed and we were unable to recover it. 00:29:11.223 [2024-10-30 14:16:09.439256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.439285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.439556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.439584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.439988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.440019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 
00:29:11.224 [2024-10-30 14:16:09.440372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.440401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.440771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.440801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.441153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.441182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.441550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.441579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.441965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.441996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.442201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.442230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.442454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.442482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.442845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.442876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.443248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.443277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.443637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.443665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 
00:29:11.224 [2024-10-30 14:16:09.444028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.444058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.444277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.444309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.444687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.444717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.445067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.445100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.445345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.445374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.445693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.445722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.446033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.446064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.446415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.446445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.446812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.446843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.447186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.447217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 
00:29:11.224 [2024-10-30 14:16:09.447585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.447613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.447968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.448000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.448360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.448390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.448756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.448788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.449105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.449135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.449519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.449547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.449903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.449932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.450304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.450334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.450705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.450734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.450964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.450993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 
00:29:11.224 [2024-10-30 14:16:09.451372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.451409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.451790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.451821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.452193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.452223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.452588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.452617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.452971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.453009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.453352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.453381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.453595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.453625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.224 qpair failed and we were unable to recover it. 00:29:11.224 [2024-10-30 14:16:09.453954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.224 [2024-10-30 14:16:09.453984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.454316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.454346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.454706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.454735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 
00:29:11.225 [2024-10-30 14:16:09.455133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.455162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.455520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.455549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.455909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.455941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.456299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.456329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.456586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.456616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.456981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.457011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.457371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.457400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.457623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.457654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.457998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.458029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.458371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.458401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 
00:29:11.225 [2024-10-30 14:16:09.458763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.458793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.459156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.459185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.459552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.459581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.459936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.459966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.460346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.460377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.460723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.460770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.461106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.461135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.461492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.461522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.461896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.461925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.462215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.462243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 
00:29:11.225 [2024-10-30 14:16:09.462579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.462611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.462848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.462879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.463269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.463297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.463557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.463588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.463955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.463987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.464346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.464375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.464753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.464785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.465142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.465172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.465528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.465557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.465933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.465963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 
00:29:11.225 [2024-10-30 14:16:09.466326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.466361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.466724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.466761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.467126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.467155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.467367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.467396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.467769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.467798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.468166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.468194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.468567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.468595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.225 [2024-10-30 14:16:09.468986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.225 [2024-10-30 14:16:09.469016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.225 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.469393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.469422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.469774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.469804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 
00:29:11.226 [2024-10-30 14:16:09.470028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.470056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.470311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.470340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.470596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.470625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.470949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.470979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.471341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.471370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.471755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.471788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.472158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.472189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.472540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.472570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.472780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.472810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.473196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.473225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 
00:29:11.226 [2024-10-30 14:16:09.473590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.473619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.473856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.473886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.474217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.474248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.474599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.474629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.474987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.475018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.475251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.475280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.475429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.475461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.475688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.475726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.475967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.475996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.476367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.476396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 
00:29:11.226 [2024-10-30 14:16:09.476535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.476563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.476933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.476963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.477331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.477360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.477727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.477762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.478126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.478156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.478387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.478416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.478670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.478699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.479097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.479126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.479352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.479381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.479735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.479785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 
00:29:11.226 [2024-10-30 14:16:09.480169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.480204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.480546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.480576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.480923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.480954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.481168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.481197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.481568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.481596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.481976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.482006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.482367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.482398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.482771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.226 [2024-10-30 14:16:09.482801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.226 qpair failed and we were unable to recover it. 00:29:11.226 [2024-10-30 14:16:09.482985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.483014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.483254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.483282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 
00:29:11.227 [2024-10-30 14:16:09.483631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.483660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.484049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.484080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.484316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.484344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.484721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.484760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.484863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.484891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.485263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.485292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.485534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.485564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.485926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.485956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.486171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.486199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.486550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.486579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 
00:29:11.227 [2024-10-30 14:16:09.486803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.486833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.487062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.487091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.487337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.487372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.487723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.487763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.488104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.488133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.488354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.488392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.488760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.488790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.489023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.489052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.489325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.489357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.489704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.489734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 
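(Annotation, not part of the captured log: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP connect() issued by posix_sock_create was actively refused because nothing was accepting on the target's NVMe/TCP port, so nvme_tcp_qpair_connect_sock reports the socket error and the qpair cannot be recovered. The following is a minimal, self-contained C sketch, not SPDK code, that reproduces the same errno under the assumption that nothing is listening on the address and port taken from the log records above.)

/* Illustration only: a bare connect() that fails the same way as the log
 * records above when no listener is accepting on the target port. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the port closed this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}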
00:29:11.227 [2024-10-30 14:16:09.489956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.489985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.490223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.490252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.490613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.490642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.490907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.490936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.491036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.491064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9dac000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 
00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Read completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 Write completed with error (sct=0, sc=8) 00:29:11.227 starting I/O failed 00:29:11.227 [2024-10-30 14:16:09.491886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.227 [2024-10-30 14:16:09.492467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.492526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.227 [2024-10-30 14:16:09.492790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.227 [2024-10-30 14:16:09.492828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.227 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.493259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.493368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.493671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.493708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.494239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.494346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.494764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.494803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 
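(Annotation, not part of the captured log: the (sct=0, sc=8) pair in the aborted I/O above is the NVMe completion status, status code type 0, the generic command status set, with status code 0x08, "Command Aborted due to SQ Deletion"; the preceding CQ transport error -6 corresponds to -ENXIO, "No such device or address", after which the queue pair is torn down and every outstanding read and write completes with that abort status. Below is a minimal C sketch, an illustration of the status field layout rather than SPDK's own helpers, that decodes such a pair from the 16-bit status half of completion dword 3; the raw value used is hypothetical and simply matches the pair shown in the log.)

#include <stdint.h>
#include <stdio.h>

/* Status half of NVMe completion dword 3:
 * bit 0 = phase tag, bits 8:1 = status code (SC), bits 11:9 = status code type (SCT). */
static void decode_status(uint16_t raw)
{
    unsigned sc  = (raw >> 1) & 0xff;
    unsigned sct = (raw >> 9) & 0x7;

    printf("sct=%u, sc=%u%s\n", sct, sc,
           (sct == 0 && sc == 8) ? " (command aborted due to SQ deletion)" : "");
}

int main(void)
{
    /* Hypothetical raw status word matching the log's (sct=0, sc=8). */
    decode_status((uint16_t)((0u << 9) | (8u << 1)));
    return 0;
}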
00:29:11.496 [2024-10-30 14:16:09.495170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.495201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.495461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.495490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.495808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.495863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.496253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.496281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.496615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.496645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.496867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.496899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.497262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.497293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.497555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.497584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.498024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.498055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.498311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.498346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 
00:29:11.496 [2024-10-30 14:16:09.498755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.498788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.499022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.499050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.499227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.499258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-10-30 14:16:09.499477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-10-30 14:16:09.499507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.499724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.499766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.500125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.500154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.500539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.500568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.500790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.500821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.501179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.501208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.501432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.501461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 
00:29:11.497 [2024-10-30 14:16:09.501693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.501726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.501982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.502015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.502400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.502429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.502718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.502757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.503134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.503163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.503423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.503452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.503674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.503704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.504054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.504085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.504306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.504336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.504464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.504498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 
00:29:11.497 [2024-10-30 14:16:09.504772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.504804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.505169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.505198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.505421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.505458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.505681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.505714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.506089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.506121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.506479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.506508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.506866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.506898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.507263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.507292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.507675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.507705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.508053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.508086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 
00:29:11.497 [2024-10-30 14:16:09.508470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.508500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.508847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.508877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.509247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.509276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.509631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.509660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.510014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.510045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.510425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.510455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.510832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.510863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.511202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.511232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.511593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.511621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.511995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.512025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 
00:29:11.497 [2024-10-30 14:16:09.512377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.512409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.512774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.512804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.513129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.513158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.513370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.513403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.513771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.513802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.514144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.514175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.514536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.514567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.514777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.514806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.515162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.515191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.515497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.515530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 
00:29:11.497 [2024-10-30 14:16:09.515887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.515918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.516280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.516309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.516667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.516697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.516918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.516949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.517192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.517220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.517571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.517601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.517972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.518002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.518253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.518281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.518665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.518694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.519050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.519082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 
00:29:11.497 [2024-10-30 14:16:09.519436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.519466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.519822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.519853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.520239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.520276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.520630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.520660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.521020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.521051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.521414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.521444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.521789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.521820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.522188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.522219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.522572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.522602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.522972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.523002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 
00:29:11.497 [2024-10-30 14:16:09.523335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.523363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.523739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.523779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.524124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.524154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.524514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.524543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.524908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.524938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.525301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.525329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.525669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.525700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.526077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.526107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.526470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.526499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 00:29:11.497 [2024-10-30 14:16:09.526862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.497 [2024-10-30 14:16:09.526893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.497 qpair failed and we were unable to recover it. 
00:29:11.497 [2024-10-30 14:16:09.527272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.527300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.527671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.527699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.528138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.528169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.528524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.528554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.528814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.528847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.529078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.529106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.529469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.529498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.529882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.529911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.530278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.530306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.530667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.530696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 
00:29:11.498 [2024-10-30 14:16:09.531088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.531119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.531493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.531521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.531800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.531831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.532210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.532239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.532597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.532627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.532975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.533006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.533366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.533394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.533767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.533796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.534017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.534046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.534285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.534313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 
00:29:11.498 [2024-10-30 14:16:09.534605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.534634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.534997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.535028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.535385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.535420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.535782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.535813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.536203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.536232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.536635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.536663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.537016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.537047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.537299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.537329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.537695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.537725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.538085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.538116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 
00:29:11.498 [2024-10-30 14:16:09.538490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.538519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.538888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.538918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.539312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.539341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.539692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.539721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.540056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.540085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.540463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.540493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.540760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.540791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.541021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.541050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.541440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.541468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.541680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.541709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 
00:29:11.498 [2024-10-30 14:16:09.542078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.542108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.542464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.542492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.542870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.542901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.543276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.543305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.543644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.543671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.544038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.544068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.544327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.544356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.544711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.544740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.545106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.545137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.545494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.545525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 
00:29:11.498 [2024-10-30 14:16:09.545872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.545901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.546281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.546311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.546640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.546669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.546935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.546965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.547332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.547360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.547724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.547762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.548134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.548163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.548522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.548551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.548920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.548950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.549320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.549349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 
00:29:11.498 [2024-10-30 14:16:09.549710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.549738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.550123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.550153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.550506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.550542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.550813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.550844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.551195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.551225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.551606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.551635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.551973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.552004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.552365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.552394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.552642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.552670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.552888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.552919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 
00:29:11.498 [2024-10-30 14:16:09.553286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.553314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.553689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.553718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.554098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.554127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.554490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.554520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.498 [2024-10-30 14:16:09.554892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.498 [2024-10-30 14:16:09.554922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.498 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.555302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.555332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.555700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.555731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.556091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.556121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.556471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.556499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.556721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.556757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 
00:29:11.499 [2024-10-30 14:16:09.556980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.557009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.557357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.557385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.557756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.557785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.558050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.558081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.558480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.558509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.558865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.558897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.559123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.559152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.559495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.559524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.559863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.559893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 00:29:11.499 [2024-10-30 14:16:09.560284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.499 [2024-10-30 14:16:09.560315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.499 qpair failed and we were unable to recover it. 
00:29:11.501 [2024-10-30 14:16:09.628041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.628071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.628422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.628453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.628827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.628859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.629084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.629113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.629262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.629295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.629698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.629727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.629950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.629980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.630269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.630299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.630516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.630549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.630780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.630820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 
00:29:11.501 [2024-10-30 14:16:09.631211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.631240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.631595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.631625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.631972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.632004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.501 qpair failed and we were unable to recover it. 00:29:11.501 [2024-10-30 14:16:09.632358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.501 [2024-10-30 14:16:09.632388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.632613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.632643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.632913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.632944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.633182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.633212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.633595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.633625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.633866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.633899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.634251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.634280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 
00:29:11.502 [2024-10-30 14:16:09.634632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.634662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.635008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.635038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.635401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.635430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.635791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.635822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.636181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.636211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.636584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.636614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.636835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.636866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.637234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.637264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.637616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.637646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.638009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.638039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 
00:29:11.502 [2024-10-30 14:16:09.638398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.638427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.638789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.638819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.639180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.639211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.639466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.639495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.639846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.639877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.640240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.640270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.640622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.640652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.641013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.641045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.641398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.641428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.641795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.641826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 
00:29:11.502 [2024-10-30 14:16:09.642185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.642215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.642592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.642620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.642949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.642980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.643380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.643410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.643789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.643820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.644175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.644206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.644591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.644621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.644994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.645025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.645395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.645425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.645780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.645817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 
00:29:11.502 [2024-10-30 14:16:09.646205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.646234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.646594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.646623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.646870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.646903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.647147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.647178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.647422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.647450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.647696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.647727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.648061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.648090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.648453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.648483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.648847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.648878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.649247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.649277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 
00:29:11.502 [2024-10-30 14:16:09.649617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.649647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.649992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.650023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.650382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.650411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.650773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.650804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.651128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.651156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.651534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.651564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.651780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.651811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.652125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.652156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.652479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.652510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.652858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.652890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 
00:29:11.502 [2024-10-30 14:16:09.653230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.653261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.653651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.653680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.654043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.654074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.654436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.654466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.654860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.654891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.655262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.655292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.655633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.655662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.656026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.656056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.656386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.656416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.656773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.656805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 
00:29:11.502 [2024-10-30 14:16:09.657146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.657177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.657383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.657413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.657774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.657804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.658165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.658194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.658534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.658564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.658930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.658960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.659330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.659359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.659715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.659745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.660099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.660129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 00:29:11.502 [2024-10-30 14:16:09.660485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.660520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.502 qpair failed and we were unable to recover it. 
00:29:11.502 [2024-10-30 14:16:09.660837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.502 [2024-10-30 14:16:09.660868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.661198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.661228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.661452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.661481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.661841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.661872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.662207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.662239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.662461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.662491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.662824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.662854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.663190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.663220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.663577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.663606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.663856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.663887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 
00:29:11.503 [2024-10-30 14:16:09.664231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.664262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.664658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.664687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.665064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.665095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.665476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.665506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.665856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.665886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.666106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.666135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.666482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.666511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.666864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.666895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.667279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.667308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.667675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.667705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 
00:29:11.503 [2024-10-30 14:16:09.668041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.668072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.668282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.668311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.668680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.668709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.669087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.669118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.669477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.669507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.669879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.669910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.670286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.670317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.670676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.670705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.671092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.671123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.671474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.671504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 
00:29:11.503 [2024-10-30 14:16:09.671866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.671897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.672101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.672132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.672482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.672512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.672877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.672910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.673269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.673299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.673656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.673687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.674067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.674098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.674428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.674458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.674719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.674759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.675096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.675132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 
00:29:11.503 [2024-10-30 14:16:09.675496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.675525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.675880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.675910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.676143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.676172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.676403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.676431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.676813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.676842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.677193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.677223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.677578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.677606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.677832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.677860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.678246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.678275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.678508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.678537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 
00:29:11.503 [2024-10-30 14:16:09.678883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.678912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.679275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.679304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.679671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.679700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.680091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.680122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.680478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.680505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.680710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.680738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.680972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.681001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.681372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.681400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.681656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.681684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.681917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.681949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 
00:29:11.503 [2024-10-30 14:16:09.682319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.682348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.682586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.682618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.682876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.682913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.683293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.683323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.683690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.683719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.684078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.684108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.684559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.684589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.684914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.684944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.685166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.685194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.685430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.685461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 
00:29:11.503 [2024-10-30 14:16:09.685854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.685884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.686254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.686284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.686654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.503 [2024-10-30 14:16:09.686682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.503 qpair failed and we were unable to recover it. 00:29:11.503 [2024-10-30 14:16:09.686899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.686928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.687304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.687332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.687551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.687579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.687962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.687992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.688208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.688236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.688483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.688513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.688891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.688927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-10-30 14:16:09.689367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.689396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.689780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.689810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.690035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.690064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.690316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.690345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.690577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.690608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.690860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.690889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.691225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.691255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.691447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.691477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.691848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.691877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.692262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.692291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-10-30 14:16:09.692663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.692692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.693094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.693123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.693476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.693504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.693736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.693774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.694170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.694199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.694638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.694666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.695132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.695161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.695572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.695600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.695983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.696013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.696240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.696268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-10-30 14:16:09.696423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.696452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.696763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.696791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.697166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.697195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.697562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.697591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.697975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.698004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.698359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.698387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.698638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.698667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.698945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.698976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.699351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.699379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.699581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.699610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-10-30 14:16:09.699843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.699873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.700100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.700128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.700492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.700520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.700786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.700815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.701167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.701196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.701569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.701597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.701834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.701864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.702203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.702232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.702603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.702632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.703001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.703038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-10-30 14:16:09.703403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.703432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.703776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.703807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.704035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.704063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.704300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.704328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.704557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.704585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.704860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.704890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.705255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.705284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.705653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.705681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.706049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.706079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.706431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.706461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-10-30 14:16:09.706681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.706710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.706885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.706915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.707302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.707330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.707589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.707619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.707823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.707854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.708222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.708251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.708478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.708507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.708845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.708875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.709279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.709307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.709685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.709714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-10-30 14:16:09.709943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.709972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.710208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.710237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.710477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.710508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.710741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.710776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.711009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.711038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.711406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.711434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.711648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.711677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.712084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.712114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.712335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.712363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.504 [2024-10-30 14:16:09.712604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.712632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-10-30 14:16:09.713008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.504 [2024-10-30 14:16:09.713037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.504 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.713357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.713387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.713575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.713603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.713811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.713841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.713948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.713977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.714193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.714222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.714451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.714480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.714881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.505 [2024-10-30 14:16:09.714913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.715060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.715089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 
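For reference, every "connect() failed, errno = 111" record in the run above is plain ECONNREFUSED at the POSIX socket level: the host can reach 10.0.0.2 but nothing is accepting on port 4420 while the target side is torn down. A minimal standalone sketch (not SPDK code; address and port copied from the log for illustration) that produces the same errno on a Linux host when the port has no listener:

/* Minimal sketch, assuming a reachable host with no listener on the port:
 * connect() fails with errno 111 (ECONNREFUSED), which is what
 * posix_sock_create() is logging for each attempt above. Not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host up but no listener on 4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}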
00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Write completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 Read completed with error (sct=0, sc=8) 00:29:11.505 starting I/O failed 00:29:11.505 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.505 [2024-10-30 14:16:09.715891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.505 [2024-10-30 
14:16:09.716352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.716415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.716679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.716710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.717165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.717272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.717724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.717780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.718001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.718030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.718333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.718372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.718761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.718792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.719030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.719060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.719311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.719342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.719577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.719609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 
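The interleaved "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" lines above report, for each outstanding command, the NVMe Status Code Type and Status Code of the failed completion, and "CQ transport error -6 (No such device or address)" is -ENXIO from the transport layer. An illustrative decoder for the (sct, sc) pair seen here, with names taken from the NVMe base specification's status tables; this is a hedged sketch, not SPDK's own status-to-string helper:

/* Illustrative (sct, sc) decoder; mapping follows the NVMe base spec's
 * Status Code Type and Generic Command Status tables. Not SPDK code. */
#include <stdio.h>

static const char *sct_name(int sct)
{
    switch (sct) {
    case 0x0: return "Generic Command Status";
    case 0x1: return "Command Specific Status";
    case 0x2: return "Media and Data Integrity Errors";
    case 0x3: return "Path Related Status";
    default:  return "Reserved/Vendor Specific";
    }
}

static const char *generic_sc_name(int sc)
{
    switch (sc) {
    case 0x0: return "Successful Completion";
    case 0x4: return "Data Transfer Error";
    case 0x6: return "Internal Error";
    case 0x7: return "Command Abort Requested";
    case 0x8: return "Command Aborted due to SQ Deletion";
    default:  return "(see the Generic Command Status table in the spec)";
    }
}

int main(void)
{
    int sct = 0, sc = 8;   /* values from the failed completions above */
    printf("sct=%d (%s), sc=%d (%s)\n",
           sct, sct_name(sct), sc, generic_sc_name(sc));
    /* Expected output: sct=0 (Generic Command Status),
     *                  sc=8 (Command Aborted due to SQ Deletion),
     * consistent with in-flight I/O being aborted when the qpair goes away. */
    return 0;
}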
00:29:11.505 [2024-10-30 14:16:09.719942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.719974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.720208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.720238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.720580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.720613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.720971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.721003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.721253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.721284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.721665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.721697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.722075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.722106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.722482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.722512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.722639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.722672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.723031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.723061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-10-30 14:16:09.723162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.723191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9db4000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.723631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.723727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.723995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.724029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.724278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.724308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.724660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.724692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.724937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.724969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.725288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.725319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.725676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.725706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.726172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.726206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.726568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.726599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-10-30 14:16:09.726845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.726884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.727270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.727313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.727544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.727576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.727928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.727959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.728330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.728359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.728724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.728764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.729025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.729054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.729406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.729437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.729781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.729814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.730091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.730121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-10-30 14:16:09.730359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.730388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.730764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.730798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.731191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.731220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.731570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.731599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.731826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.731856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.732204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.732233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.732568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.732598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.733056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.733086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.733305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.505 [2024-10-30 14:16:09.733335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.505 qpair failed and we were unable to recover it. 00:29:11.505 [2024-10-30 14:16:09.733582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.733610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-10-30 14:16:09.733838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.733871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.734120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.734151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.734442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.734472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.734812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.734841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.735207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.735238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.735459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.735488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.735815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.735846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.736179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.736209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.736425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.736454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.736774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.736804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-10-30 14:16:09.737168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.737198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.737556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.737586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.737979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.738011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.738359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.738390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.738780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.738810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.738904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.738932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.739073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.739103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.739349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.739378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.739583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.739612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.739955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.739986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-10-30 14:16:09.740216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.740244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.740615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.740651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.741022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.741052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.741398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.741426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.741789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.741820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.742188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.742217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.742446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.742476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.742680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.742709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.743074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.743105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.743456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.743486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-10-30 14:16:09.743847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.743878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.744254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.744284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.744514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.744544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.744888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.744919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.745271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.745303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.745687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.745716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.745874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.745908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.746146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.746184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.746553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.746583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.746805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.746835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-10-30 14:16:09.747169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.747198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.747331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.747359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.747575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.747604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.747990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.748020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.748361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.748391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.748618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.748646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.748997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.749028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.749362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.749393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.749603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.749638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.749986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.750017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-10-30 14:16:09.750374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.750403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.750587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.750615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.750904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.750933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.751312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.751342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.751699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.751729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.751980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.752009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.752392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.752421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.752543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.752571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.752911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.752942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.753261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.753289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-10-30 14:16:09.753642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.753672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.754056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.754087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.754425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.754455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.754805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.754835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.755194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.755222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.755588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.755617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.755968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.755999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.756352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.756381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.756761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.756792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 00:29:11.506 [2024-10-30 14:16:09.757167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.506 [2024-10-30 14:16:09.757196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-10-30 14:16:09.757566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.506 [2024-10-30 14:16:09.757595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420
00:29:11.506 qpair failed and we were unable to recover it.
00:29:11.506 [2024-10-30 14:16:09.757862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.506 [2024-10-30 14:16:09.757893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420
00:29:11.506 qpair failed and we were unable to recover it.
00:29:11.506 [2024-10-30 14:16:09.758124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.506 [2024-10-30 14:16:09.758153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420
00:29:11.506 qpair failed and we were unable to recover it.
00:29:11.506 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:11.506 [2024-10-30 14:16:09.758526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.506 [2024-10-30 14:16:09.758557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420
00:29:11.506 qpair failed and we were unable to recover it.
00:29:11.506 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:11.506 [2024-10-30 14:16:09.758896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.506 [2024-10-30 14:16:09.758929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420
00:29:11.506 qpair failed and we were unable to recover it.
00:29:11.507 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.507 [2024-10-30 14:16:09.759277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.507 [2024-10-30 14:16:09.759317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420
00:29:11.507 qpair failed and we were unable to recover it.
00:29:11.507 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.507 [2024-10-30 14:16:09.759685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.507 [2024-10-30 14:16:09.759715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420
00:29:11.507 qpair failed and we were unable to recover it.
00:29:11.507 [2024-10-30 14:16:09.760077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.507 [2024-10-30 14:16:09.760108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420
00:29:11.507 qpair failed and we were unable to recover it.
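The xtrace lines interleaved above show the nvmf_target_disconnect_tc2 test case driving the target over JSON-RPC (installing a cleanup trap, then creating a 64 MB malloc bdev with 512-byte blocks named Malloc0) while the host-side qpair keeps getting connection refused (errno 111) from 10.0.0.2 port 4420. As a rough orientation only, the comparable target-side setup can be sketched directly with SPDK's scripts/rpc.py; this is a minimal sketch under the assumption of a running nvmf_tgt and the standard rpc.py helpers, not the autotest script itself, and the subsystem NQN and serial number are illustrative placeholders rather than values taken from this log:

# Minimal sketch of an equivalent target-side RPC sequence (assumed setup, not from this log)
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512-byte blocks, as in the trace above
./scripts/rpc.py nvmf_create_transport -t tcp                # enable the NVMe-oF TCP transport
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # NQN/serial are placeholders
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # address/port as seen in the errors above

Until the listener from the last step is actually up, a host trying 10.0.0.2:4420 gets exactly the connect() failures recorded in this log.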
00:29:11.507 [2024-10-30 14:16:09.760473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.760501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.760876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.760905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.761144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.761172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.761400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.761432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.761792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.761823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.762076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.762105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.762470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.762500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.762738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.762777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.763101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.763138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.763499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.763529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 
00:29:11.507 [2024-10-30 14:16:09.763890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.763920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.764129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.764157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.764443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.764471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.764832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.764862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.765066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.765094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.765431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.765461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.765801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.765832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.766157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.766187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.766573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.766602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.766962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.766993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 
00:29:11.507 [2024-10-30 14:16:09.767377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.767405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.767785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.767815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.768154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.768185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.768557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.768586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.768949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.768980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.769357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.769386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.769744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.769792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.770159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.770188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.770556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.770584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.770981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.771011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 
00:29:11.507 [2024-10-30 14:16:09.771419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.771448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.771813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.771842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.772199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.772227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.772593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.772623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.772979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.773009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.773364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.773394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.773692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.773720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.774095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.774124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.774484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.774512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.774869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.774899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 
00:29:11.507 [2024-10-30 14:16:09.775254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.775285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.775662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.775690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.776055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.776084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.776448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.776477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.776826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.776856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.777119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.777147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.777508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.777537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.777895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.777925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.778285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.778327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.778677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.778706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 
00:29:11.507 [2024-10-30 14:16:09.779114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.779143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.779364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.779392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.779764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.779795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.780172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.780201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.780565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.780593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.780984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.781016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.781391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.781420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.781786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.781815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.782166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.782194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.782566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.782594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 
00:29:11.507 [2024-10-30 14:16:09.782991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.783021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.783381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.783409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.783763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.783793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.784174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.784202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.784572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.784600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.784952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.784982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.785348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.785376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.785787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.785819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.786171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.786202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.786587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.786617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 
00:29:11.507 [2024-10-30 14:16:09.786923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.786953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.507 [2024-10-30 14:16:09.787318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.507 [2024-10-30 14:16:09.787347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.507 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.787718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.787760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.788120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.788149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.788533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.788563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.788944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.788975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.789339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.789368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.789739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.789777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.790147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.790175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.790539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.790569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 
00:29:11.771 [2024-10-30 14:16:09.790907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.790938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.791278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.791306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.791619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.791649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.791997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.792027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.792397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.792426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.792785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.792815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.771 qpair failed and we were unable to recover it. 00:29:11.771 [2024-10-30 14:16:09.793182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.771 [2024-10-30 14:16:09.793211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.793593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.793622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.793971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.794007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.794214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.794243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 
00:29:11.772 [2024-10-30 14:16:09.794449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.794479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.794844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.794873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.795251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.795281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.795656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.795685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.796040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.796071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.796281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.796310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.796575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.796603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.796994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.797024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.797400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.797429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.797764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.797793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 
00:29:11.772 [2024-10-30 14:16:09.798040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.798069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.798286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.798313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.798686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.798715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.799058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.799089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.799440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.799470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.799830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.799862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.800188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.800217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.800441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.800469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.800824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.800854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.801238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.801268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 
00:29:11.772 [2024-10-30 14:16:09.801487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.801517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.801881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.801911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.802123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.802151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.802517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.802545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.802923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.802953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.803325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.803356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.803687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.803715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.804081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.804111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.804331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.804359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.804579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.804607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 
00:29:11.772 [2024-10-30 14:16:09.804981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.805011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.805389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.805418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 Malloc0 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.805698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.805726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.806108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.806137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.772 [2024-10-30 14:16:09.806377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 [2024-10-30 14:16:09.806406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.772 [2024-10-30 14:16:09.806754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.772 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:11.772 [2024-10-30 14:16:09.806784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.772 qpair failed and we were unable to recover it. 00:29:11.773 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.773 [2024-10-30 14:16:09.807158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.807187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.773 [2024-10-30 14:16:09.807551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.807580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 
00:29:11.773 [2024-10-30 14:16:09.807673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.807701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.808067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.808096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.808349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.808378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.808589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.808618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.808846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.808877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.809254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.809283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.809641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.809670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.810043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.810074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.810428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.810458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.810822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.810852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 
00:29:11.773 [2024-10-30 14:16:09.811058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.811086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.811359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.811389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.811603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.811633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.811855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.811884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.812116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.812144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.812500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.812529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.812810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.773 [2024-10-30 14:16:09.812871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.812902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.813273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.813301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.813548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.813581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 
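The rpc_cmd nvmf_create_transport -t tcp -o trace above, together with the tcp.c "*** TCP Transport Init ***" notice, is the point where the target side of this test brings up its NVMe-oF TCP transport. Outside the autotest wrappers the same step is normally done with SPDK's rpc.py against a running nvmf_tgt; a minimal sketch, assuming a default RPC socket and a standard SPDK build layout, and leaving out the test script's extra -o option:

    ./build/bin/nvmf_tgt &                          # start the NVMe-oF target application
    ./scripts/rpc.py nvmf_create_transport -t tcp   # create the TCP transport before adding any subsystems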
00:29:11.773 [2024-10-30 14:16:09.813818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.813848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.814243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.814273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.814624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.814653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.815014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.815043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.815267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.815296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.815678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.815706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.815843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.815876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.816216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.816246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.816595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.816625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.816896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.816926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 
00:29:11.773 [2024-10-30 14:16:09.817188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.817218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.817448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.817477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.817827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.817856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.818076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.818105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.818357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.818385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.818624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.818652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.818886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.818919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-10-30 14:16:09.819180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-10-30 14:16:09.819212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.819440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.819469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.819869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.819905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 [2024-10-30 14:16:09.820327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.820357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.820600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.820631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.820889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.820920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.821281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.821309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.821690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.821720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.774 [2024-10-30 14:16:09.821954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.821985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:11.774 [2024-10-30 14:16:09.822340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.822369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.774 [2024-10-30 14:16:09.822611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.822643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.774 [2024-10-30 14:16:09.822891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.822921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.823333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.823362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.823731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.823768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.823991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.824020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.824409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.824438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.824683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.824712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.825140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.825170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.825525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.825553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.825784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.825814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
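The target_disconnect.sh@22 trace above issues rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, creating the subsystem that the initiator keeps trying to reach in the surrounding connect errors. A rough standalone equivalent, with the flags taken from the trace (-a allows any host NQN to connect, -s sets the subsystem serial number):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001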
00:29:11.774 [2024-10-30 14:16:09.826219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.826248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.826503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.826531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.826875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.826906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.827133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.827163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.827506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.827535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.827789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.827822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.828104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.828132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.828514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.828544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.828899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.828930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.829146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.829175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 [2024-10-30 14:16:09.829416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.829444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.829825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.829855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.830220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.830249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.830626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.830654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.830914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.830944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.831311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.831341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.831573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.831602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.831816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.831847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.832102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-10-30 14:16:09.832132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-10-30 14:16:09.832378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.832407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 
00:29:11.775 [2024-10-30 14:16:09.832764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.832804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.833132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.833161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.833443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.833473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.833735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.833778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.775 [2024-10-30 14:16:09.834047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.834076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:11.775 [2024-10-30 14:16:09.834453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.834493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.775 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.775 [2024-10-30 14:16:09.834794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.834826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.835192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.835223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 
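The nvmf_subsystem_add_ns trace above attaches the Malloc0 bdev to nqn.2016-06.io.spdk:cnode1 as a namespace; the lone "Malloc0" token interleaved with the connect errors earlier appears to be RPC output echoing that bdev's name. A sketch of the same two steps outside the harness, where the malloc bdev size and block size are illustrative assumptions rather than values taken from this log:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MB RAM-backed bdev, 512-byte blocks (assumed sizes)
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0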
00:29:11.775 [2024-10-30 14:16:09.835588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.835616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.835982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.836013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.836389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.836418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.836778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.836807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.837152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.837183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.837531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.837566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.837950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.837980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.838328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.838356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.838715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.838744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.839098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.839128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 
00:29:11.775 [2024-10-30 14:16:09.839468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.839497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.839838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.839868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.840225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.840254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.840629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.840658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.841040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.841070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.841407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.841439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.841797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.841828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.842181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.842217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.842567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.842597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.842816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.842847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 
00:29:11.775 [2024-10-30 14:16:09.843253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.843282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.843635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.843664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.844014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.844046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.844406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.844435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.844788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-10-30 14:16:09.844821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-10-30 14:16:09.845211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.845241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.845619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.845647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.776 [2024-10-30 14:16:09.846058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.846089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.776 [2024-10-30 14:16:09.846454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.846482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 
00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.776 [2024-10-30 14:16:09.846844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.846878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.847229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.847257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.847568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.847597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.847991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.848022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.848386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.848414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.848731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.848784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.849169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.849198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.849578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.849606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.850002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.850033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 
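The nvmf_subsystem_add_listener trace above (target_disconnect.sh@25) is what finally exposes the subsystem on the wire: it tells the target to accept NVMe/TCP connections for nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 port 4420, the same address and port the initiator has been failing to reach with errno 111 up to this point. The standalone equivalent, with the values taken from the trace:

    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420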
00:29:11.776 [2024-10-30 14:16:09.850405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.850435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.850766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.850796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.851130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.851158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.851469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.851498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.851882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.851913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.852240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.852269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.852635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.852665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.852875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.776 [2024-10-30 14:16:09.852905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9da8000b90 with addr=10.0.0.2, port=4420 00:29:11.776 qpair failed and we were unable to recover it. 
00:29:11.776 [2024-10-30 14:16:09.853204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.776 [2024-10-30 14:16:09.864149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.776 [2024-10-30 14:16:09.864314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.776 [2024-10-30 14:16:09.864363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.776 [2024-10-30 14:16:09.864386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.776 [2024-10-30 14:16:09.864408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90
00:29:11.776 [2024-10-30 14:16:09.864462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:11.776 qpair failed and we were unable to recover it.
00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.776 14:16:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1210698
00:29:11.776 [2024-10-30 14:16:09.873910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.776 [2024-10-30 14:16:09.874060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.776 [2024-10-30 14:16:09.874091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.776 [2024-10-30 14:16:09.874107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.776 [2024-10-30 14:16:09.874120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90
00:29:11.776 [2024-10-30 14:16:09.874153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:11.776 qpair failed and we were unable to recover it.
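Once the discovery listener is added (target_disconnect.sh@26) and tcp.c reports that the target is listening on 10.0.0.2 port 4420, the target is reachable, and from here on the failures change character from refused TCP connects to rejected Fabrics CONNECT commands. The connection attempts in this run come from SPDK's own host stack (the nvme_tcp.c, nvme_fabric.c and nvme_qpair.c messages), but for reference, reaching the same listener from a Linux initiator with nvme-cli would typically look like the following, given here only as an illustration and not as part of this test:

    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1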
00:29:11.776 [2024-10-30 14:16:09.883836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.776 [2024-10-30 14:16:09.883923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.776 [2024-10-30 14:16:09.883945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.776 [2024-10-30 14:16:09.883956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.776 [2024-10-30 14:16:09.883966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.776 [2024-10-30 14:16:09.883989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.893887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.776 [2024-10-30 14:16:09.893961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.776 [2024-10-30 14:16:09.893979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.776 [2024-10-30 14:16:09.893986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.776 [2024-10-30 14:16:09.893993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.776 [2024-10-30 14:16:09.894011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.776 qpair failed and we were unable to recover it. 00:29:11.776 [2024-10-30 14:16:09.903905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.776 [2024-10-30 14:16:09.903980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.776 [2024-10-30 14:16:09.903997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.776 [2024-10-30 14:16:09.904004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.776 [2024-10-30 14:16:09.904011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.776 [2024-10-30 14:16:09.904029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.776 qpair failed and we were unable to recover it. 
00:29:11.776 [2024-10-30 14:16:09.913887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:09.913955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:09.913975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:09.913982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:09.913989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:09.914006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:09.923918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:09.923988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:09.924011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:09.924019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:09.924025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:09.924042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:09.933934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:09.934006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:09.934023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:09.934031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:09.934038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:09.934055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 
00:29:11.777 [2024-10-30 14:16:09.944058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:09.944133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:09.944149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:09.944157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:09.944164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:09.944181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:09.954042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:09.954107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:09.954126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:09.954133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:09.954140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:09.954158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:09.964105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:09.964174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:09.964191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:09.964198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:09.964210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:09.964228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 
00:29:11.777 [2024-10-30 14:16:09.974069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:09.974145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:09.974162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:09.974169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:09.974176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:09.974193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:09.984141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:09.984217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:09.984234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:09.984241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:09.984248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:09.984264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:09.994093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:09.994153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:09.994175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:09.994182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:09.994188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:09.994207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 
00:29:11.777 [2024-10-30 14:16:10.004208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:10.004271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:10.004294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:10.004302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:10.004309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:10.004328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:10.014232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:10.014362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:10.014381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:10.014389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:10.014395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:10.014412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:10.024250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:10.024326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:10.024344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:10.024351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:10.024359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:10.024376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 
00:29:11.777 [2024-10-30 14:16:10.034117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:10.034183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:10.034201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:10.034209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:10.034217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:10.034235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:10.044306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.777 [2024-10-30 14:16:10.044387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.777 [2024-10-30 14:16:10.044407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.777 [2024-10-30 14:16:10.044415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.777 [2024-10-30 14:16:10.044422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.777 [2024-10-30 14:16:10.044440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.777 qpair failed and we were unable to recover it. 00:29:11.777 [2024-10-30 14:16:10.054440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.778 [2024-10-30 14:16:10.054518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.778 [2024-10-30 14:16:10.054544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.778 [2024-10-30 14:16:10.054552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.778 [2024-10-30 14:16:10.054558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.778 [2024-10-30 14:16:10.054576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.778 qpair failed and we were unable to recover it. 
00:29:11.778 [2024-10-30 14:16:10.064430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.778 [2024-10-30 14:16:10.064541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.778 [2024-10-30 14:16:10.064580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.778 [2024-10-30 14:16:10.064590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.778 [2024-10-30 14:16:10.064597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:11.778 [2024-10-30 14:16:10.064623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:11.778 qpair failed and we were unable to recover it. 00:29:12.039 [2024-10-30 14:16:10.074413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.039 [2024-10-30 14:16:10.074476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.039 [2024-10-30 14:16:10.074498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.039 [2024-10-30 14:16:10.074506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.039 [2024-10-30 14:16:10.074514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.039 [2024-10-30 14:16:10.074533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.039 qpair failed and we were unable to recover it. 00:29:12.039 [2024-10-30 14:16:10.084409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.039 [2024-10-30 14:16:10.084470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.039 [2024-10-30 14:16:10.084489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.039 [2024-10-30 14:16:10.084498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.039 [2024-10-30 14:16:10.084505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.084523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 
00:29:12.040 [2024-10-30 14:16:10.094380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.094444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.094463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.094471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.094486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.094505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 00:29:12.040 [2024-10-30 14:16:10.104543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.104654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.104674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.104683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.104689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.104708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 00:29:12.040 [2024-10-30 14:16:10.114467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.114563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.114581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.114589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.114595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.114614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 
00:29:12.040 [2024-10-30 14:16:10.124476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.124534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.124552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.124560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.124567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.124584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 00:29:12.040 [2024-10-30 14:16:10.134521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.134641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.134658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.134667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.134674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.134691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 00:29:12.040 [2024-10-30 14:16:10.144576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.144641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.144659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.144667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.144674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.144692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 
00:29:12.040 [2024-10-30 14:16:10.154584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.154647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.154667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.154675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.154681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.154700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 00:29:12.040 [2024-10-30 14:16:10.164621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.164703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.164721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.164729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.164736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.164761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 00:29:12.040 [2024-10-30 14:16:10.174649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.174771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.174789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.174797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.174804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.174824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 
00:29:12.040 [2024-10-30 14:16:10.184692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.184773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.184791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.184798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.184805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.184823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 00:29:12.040 [2024-10-30 14:16:10.194607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.194675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.194693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.194700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.194707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.194724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 00:29:12.040 [2024-10-30 14:16:10.204778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.204884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.204903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.204910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.204917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.204935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 
00:29:12.040 [2024-10-30 14:16:10.214802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.214868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.214885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.040 [2024-10-30 14:16:10.214893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.040 [2024-10-30 14:16:10.214899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.040 [2024-10-30 14:16:10.214916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.040 qpair failed and we were unable to recover it. 00:29:12.040 [2024-10-30 14:16:10.224709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.040 [2024-10-30 14:16:10.224794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.040 [2024-10-30 14:16:10.224812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.224825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.224832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.224850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 00:29:12.041 [2024-10-30 14:16:10.234849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.234918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.234936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.234943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.234949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.234967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 
00:29:12.041 [2024-10-30 14:16:10.244804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.244881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.244898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.244905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.244912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.244929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 00:29:12.041 [2024-10-30 14:16:10.254870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.254936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.254954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.254961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.254967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.254984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 00:29:12.041 [2024-10-30 14:16:10.264850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.264915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.264933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.264941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.264947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.264970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 
00:29:12.041 [2024-10-30 14:16:10.274886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.274951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.274969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.274976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.274982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.274999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 00:29:12.041 [2024-10-30 14:16:10.284986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.285046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.285063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.285071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.285077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.285094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 00:29:12.041 [2024-10-30 14:16:10.295071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.295182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.295199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.295206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.295213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.295230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 
00:29:12.041 [2024-10-30 14:16:10.305084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.305150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.305166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.305174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.305180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.305197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 00:29:12.041 [2024-10-30 14:16:10.315039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.315103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.315121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.315129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.315136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.315153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 00:29:12.041 [2024-10-30 14:16:10.325088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.325176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.325194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.325201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.325208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.325226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 
00:29:12.041 [2024-10-30 14:16:10.335131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.041 [2024-10-30 14:16:10.335232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.041 [2024-10-30 14:16:10.335250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.041 [2024-10-30 14:16:10.335258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.041 [2024-10-30 14:16:10.335265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.041 [2024-10-30 14:16:10.335282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.041 qpair failed and we were unable to recover it. 00:29:12.304 [2024-10-30 14:16:10.345204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.304 [2024-10-30 14:16:10.345278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.304 [2024-10-30 14:16:10.345295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.304 [2024-10-30 14:16:10.345303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.304 [2024-10-30 14:16:10.345310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.304 [2024-10-30 14:16:10.345327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.304 qpair failed and we were unable to recover it. 00:29:12.304 [2024-10-30 14:16:10.355067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.304 [2024-10-30 14:16:10.355135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.304 [2024-10-30 14:16:10.355152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.304 [2024-10-30 14:16:10.355165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.304 [2024-10-30 14:16:10.355172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.304 [2024-10-30 14:16:10.355189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.304 qpair failed and we were unable to recover it. 
00:29:12.304 [2024-10-30 14:16:10.365229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.304 [2024-10-30 14:16:10.365302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.304 [2024-10-30 14:16:10.365318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.304 [2024-10-30 14:16:10.365326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.304 [2024-10-30 14:16:10.365333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.304 [2024-10-30 14:16:10.365349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.304 qpair failed and we were unable to recover it. 00:29:12.304 [2024-10-30 14:16:10.375229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.304 [2024-10-30 14:16:10.375299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.304 [2024-10-30 14:16:10.375316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.304 [2024-10-30 14:16:10.375324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.304 [2024-10-30 14:16:10.375331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.304 [2024-10-30 14:16:10.375348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.304 qpair failed and we were unable to recover it. 00:29:12.304 [2024-10-30 14:16:10.385280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.304 [2024-10-30 14:16:10.385347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.304 [2024-10-30 14:16:10.385365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.304 [2024-10-30 14:16:10.385373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.304 [2024-10-30 14:16:10.385379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.304 [2024-10-30 14:16:10.385397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.304 qpair failed and we were unable to recover it. 
00:29:12.304 [2024-10-30 14:16:10.395279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.304 [2024-10-30 14:16:10.395334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.304 [2024-10-30 14:16:10.395352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.304 [2024-10-30 14:16:10.395360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.304 [2024-10-30 14:16:10.395366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.304 [2024-10-30 14:16:10.395389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.304 qpair failed and we were unable to recover it. 00:29:12.304 [2024-10-30 14:16:10.405340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.304 [2024-10-30 14:16:10.405412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.304 [2024-10-30 14:16:10.405431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.304 [2024-10-30 14:16:10.405438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.304 [2024-10-30 14:16:10.405445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.304 [2024-10-30 14:16:10.405462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.304 qpair failed and we were unable to recover it. 00:29:12.305 [2024-10-30 14:16:10.415356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.415432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.415453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.415460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.415467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.415486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 
00:29:12.305 [2024-10-30 14:16:10.425435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.425507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.425535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.425543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.425549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.425571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-10-30 14:16:10.435404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.435479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.435517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.435527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.435535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.435560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-10-30 14:16:10.445491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.445560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.445598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.445610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.445618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.445644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 
00:29:12.305 [2024-10-30 14:16:10.455507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.455574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.455595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.455603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.455610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.455630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-10-30 14:16:10.465454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.465554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.465572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.465581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.465588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.465607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-10-30 14:16:10.475565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.475636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.475655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.475663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.475670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.475688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 
00:29:12.305 [2024-10-30 14:16:10.485584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.485648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.485673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.485680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.485687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.485705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-10-30 14:16:10.495631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.495755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.495775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.495782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.495789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.495807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-10-30 14:16:10.505703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.505781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.505799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.505806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.505813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.505831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 
00:29:12.305 [2024-10-30 14:16:10.515715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.515787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.515805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.515812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.515818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.515836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-10-30 14:16:10.525729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.525827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.525845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.525853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.525866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.525885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-10-30 14:16:10.535626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.535697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.535715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.535722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.535729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.305 [2024-10-30 14:16:10.535755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.305 qpair failed and we were unable to recover it. 
00:29:12.305 [2024-10-30 14:16:10.545742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.305 [2024-10-30 14:16:10.545853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.305 [2024-10-30 14:16:10.545871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.305 [2024-10-30 14:16:10.545879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.305 [2024-10-30 14:16:10.545886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.306 [2024-10-30 14:16:10.545904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-10-30 14:16:10.555822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.306 [2024-10-30 14:16:10.555909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.306 [2024-10-30 14:16:10.555928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.306 [2024-10-30 14:16:10.555936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.306 [2024-10-30 14:16:10.555942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.306 [2024-10-30 14:16:10.555961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-10-30 14:16:10.565811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.306 [2024-10-30 14:16:10.565867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.306 [2024-10-30 14:16:10.565885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.306 [2024-10-30 14:16:10.565893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.306 [2024-10-30 14:16:10.565899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.306 [2024-10-30 14:16:10.565916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.306 qpair failed and we were unable to recover it. 
00:29:12.306 [2024-10-30 14:16:10.575836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.306 [2024-10-30 14:16:10.575905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.306 [2024-10-30 14:16:10.575923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.306 [2024-10-30 14:16:10.575930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.306 [2024-10-30 14:16:10.575937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.306 [2024-10-30 14:16:10.575954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-10-30 14:16:10.585903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.306 [2024-10-30 14:16:10.585980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.306 [2024-10-30 14:16:10.585997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.306 [2024-10-30 14:16:10.586005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.306 [2024-10-30 14:16:10.586011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.306 [2024-10-30 14:16:10.586029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-10-30 14:16:10.595912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.306 [2024-10-30 14:16:10.595985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.306 [2024-10-30 14:16:10.596003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.306 [2024-10-30 14:16:10.596011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.306 [2024-10-30 14:16:10.596017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.306 [2024-10-30 14:16:10.596035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.306 qpair failed and we were unable to recover it. 
00:29:12.568 [2024-10-30 14:16:10.605923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.568 [2024-10-30 14:16:10.605995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.568 [2024-10-30 14:16:10.606013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.568 [2024-10-30 14:16:10.606020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.568 [2024-10-30 14:16:10.606027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.568 [2024-10-30 14:16:10.606044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.568 qpair failed and we were unable to recover it. 00:29:12.568 [2024-10-30 14:16:10.615950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.568 [2024-10-30 14:16:10.616018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.568 [2024-10-30 14:16:10.616040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.568 [2024-10-30 14:16:10.616048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.568 [2024-10-30 14:16:10.616054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.568 [2024-10-30 14:16:10.616071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.568 qpair failed and we were unable to recover it. 00:29:12.568 [2024-10-30 14:16:10.626026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.568 [2024-10-30 14:16:10.626100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.568 [2024-10-30 14:16:10.626118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.568 [2024-10-30 14:16:10.626128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.568 [2024-10-30 14:16:10.626134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.568 [2024-10-30 14:16:10.626152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.568 qpair failed and we were unable to recover it. 
00:29:12.568 [2024-10-30 14:16:10.636028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.568 [2024-10-30 14:16:10.636120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.568 [2024-10-30 14:16:10.636137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.568 [2024-10-30 14:16:10.636144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.568 [2024-10-30 14:16:10.636151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.568 [2024-10-30 14:16:10.636168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.568 qpair failed and we were unable to recover it. 00:29:12.568 [2024-10-30 14:16:10.646064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.568 [2024-10-30 14:16:10.646135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.568 [2024-10-30 14:16:10.646153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.568 [2024-10-30 14:16:10.646161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.568 [2024-10-30 14:16:10.646167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.568 [2024-10-30 14:16:10.646185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.568 qpair failed and we were unable to recover it. 00:29:12.568 [2024-10-30 14:16:10.656086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.568 [2024-10-30 14:16:10.656203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.568 [2024-10-30 14:16:10.656221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.656229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.656240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.656258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 
00:29:12.569 [2024-10-30 14:16:10.666156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.666231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.666249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.666257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.666264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.666281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 00:29:12.569 [2024-10-30 14:16:10.676145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.676228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.676247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.676255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.676261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.676279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 00:29:12.569 [2024-10-30 14:16:10.686187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.686284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.686302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.686310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.686316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.686333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 
00:29:12.569 [2024-10-30 14:16:10.696231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.696300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.696318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.696325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.696332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.696349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 00:29:12.569 [2024-10-30 14:16:10.706256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.706377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.706395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.706403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.706410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.706427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 00:29:12.569 [2024-10-30 14:16:10.716244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.716354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.716372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.716379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.716386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.716404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 
00:29:12.569 [2024-10-30 14:16:10.726268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.726328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.726357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.726365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.726371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.726393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 00:29:12.569 [2024-10-30 14:16:10.736322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.736391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.736411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.736418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.736424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.736443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 00:29:12.569 [2024-10-30 14:16:10.746255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.746326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.746347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.746355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.746362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.746382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 
00:29:12.569 [2024-10-30 14:16:10.756394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.756485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.756505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.756513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.569 [2024-10-30 14:16:10.756520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.569 [2024-10-30 14:16:10.756539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.569 qpair failed and we were unable to recover it. 00:29:12.569 [2024-10-30 14:16:10.766406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.569 [2024-10-30 14:16:10.766481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.569 [2024-10-30 14:16:10.766520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.569 [2024-10-30 14:16:10.766529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.766537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.766562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 00:29:12.570 [2024-10-30 14:16:10.776412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.570 [2024-10-30 14:16:10.776490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.570 [2024-10-30 14:16:10.776528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.570 [2024-10-30 14:16:10.776537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.776545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.776570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 
00:29:12.570 [2024-10-30 14:16:10.786518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.570 [2024-10-30 14:16:10.786609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.570 [2024-10-30 14:16:10.786630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.570 [2024-10-30 14:16:10.786644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.786651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.786670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 00:29:12.570 [2024-10-30 14:16:10.796407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.570 [2024-10-30 14:16:10.796468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.570 [2024-10-30 14:16:10.796492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.570 [2024-10-30 14:16:10.796499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.796506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.796526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 00:29:12.570 [2024-10-30 14:16:10.806535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.570 [2024-10-30 14:16:10.806635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.570 [2024-10-30 14:16:10.806655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.570 [2024-10-30 14:16:10.806663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.806669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.806689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 
00:29:12.570 [2024-10-30 14:16:10.816555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.570 [2024-10-30 14:16:10.816622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.570 [2024-10-30 14:16:10.816639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.570 [2024-10-30 14:16:10.816646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.816653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.816671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 00:29:12.570 [2024-10-30 14:16:10.826639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.570 [2024-10-30 14:16:10.826705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.570 [2024-10-30 14:16:10.826723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.570 [2024-10-30 14:16:10.826731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.826737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.826768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 00:29:12.570 [2024-10-30 14:16:10.836624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.570 [2024-10-30 14:16:10.836686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.570 [2024-10-30 14:16:10.836704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.570 [2024-10-30 14:16:10.836711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.836718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.836736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 
00:29:12.570 [2024-10-30 14:16:10.846663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.570 [2024-10-30 14:16:10.846727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.570 [2024-10-30 14:16:10.846744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.570 [2024-10-30 14:16:10.846758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.846765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.846784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 00:29:12.570 [2024-10-30 14:16:10.856695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.570 [2024-10-30 14:16:10.856772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.570 [2024-10-30 14:16:10.856792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.570 [2024-10-30 14:16:10.856799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.570 [2024-10-30 14:16:10.856805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.570 [2024-10-30 14:16:10.856823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.570 qpair failed and we were unable to recover it. 00:29:12.833 [2024-10-30 14:16:10.866758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.833 [2024-10-30 14:16:10.866869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.833 [2024-10-30 14:16:10.866888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.833 [2024-10-30 14:16:10.866896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.833 [2024-10-30 14:16:10.866903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.833 [2024-10-30 14:16:10.866921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.833 qpair failed and we were unable to recover it. 
00:29:12.833 [2024-10-30 14:16:10.876754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.833 [2024-10-30 14:16:10.876819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.833 [2024-10-30 14:16:10.876838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.833 [2024-10-30 14:16:10.876846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.833 [2024-10-30 14:16:10.876852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.833 [2024-10-30 14:16:10.876870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.833 qpair failed and we were unable to recover it. 00:29:12.833 [2024-10-30 14:16:10.886794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.833 [2024-10-30 14:16:10.886862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.833 [2024-10-30 14:16:10.886880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.833 [2024-10-30 14:16:10.886887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.833 [2024-10-30 14:16:10.886894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.833 [2024-10-30 14:16:10.886912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.833 qpair failed and we were unable to recover it. 00:29:12.833 [2024-10-30 14:16:10.896829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.833 [2024-10-30 14:16:10.896943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.833 [2024-10-30 14:16:10.896960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.833 [2024-10-30 14:16:10.896967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.833 [2024-10-30 14:16:10.896974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.833 [2024-10-30 14:16:10.896992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.833 qpair failed and we were unable to recover it. 
00:29:12.833 [2024-10-30 14:16:10.906878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.833 [2024-10-30 14:16:10.906953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.833 [2024-10-30 14:16:10.906970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.833 [2024-10-30 14:16:10.906977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.833 [2024-10-30 14:16:10.906983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.833 [2024-10-30 14:16:10.907001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.833 qpair failed and we were unable to recover it. 00:29:12.833 [2024-10-30 14:16:10.916772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.833 [2024-10-30 14:16:10.916830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.833 [2024-10-30 14:16:10.916856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.833 [2024-10-30 14:16:10.916864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.833 [2024-10-30 14:16:10.916871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.833 [2024-10-30 14:16:10.916890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.833 qpair failed and we were unable to recover it. 00:29:12.833 [2024-10-30 14:16:10.926779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.833 [2024-10-30 14:16:10.926834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.833 [2024-10-30 14:16:10.926854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.833 [2024-10-30 14:16:10.926861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.833 [2024-10-30 14:16:10.926867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:10.926885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 
00:29:12.834 [2024-10-30 14:16:10.936897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:10.936962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:10.936980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:10.936987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:10.936993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:10.937011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 00:29:12.834 [2024-10-30 14:16:10.946982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:10.947046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:10.947064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:10.947071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:10.947078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:10.947096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 00:29:12.834 [2024-10-30 14:16:10.957017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:10.957099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:10.957118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:10.957125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:10.957132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:10.957156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 
00:29:12.834 [2024-10-30 14:16:10.967032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:10.967112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:10.967130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:10.967137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:10.967144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:10.967162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 00:29:12.834 [2024-10-30 14:16:10.977060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:10.977140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:10.977157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:10.977165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:10.977172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:10.977190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 00:29:12.834 [2024-10-30 14:16:10.987083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:10.987155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:10.987173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:10.987180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:10.987187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:10.987205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 
00:29:12.834 [2024-10-30 14:16:10.997099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:10.997193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:10.997211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:10.997218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:10.997225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:10.997242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 00:29:12.834 [2024-10-30 14:16:11.007138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:11.007206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:11.007224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:11.007232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:11.007238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:11.007256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 00:29:12.834 [2024-10-30 14:16:11.017163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:11.017230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:11.017247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:11.017255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:11.017261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:11.017278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 
00:29:12.834 [2024-10-30 14:16:11.027242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:11.027318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:11.027336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:11.027343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:11.027350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:11.027367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 00:29:12.834 [2024-10-30 14:16:11.037221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:11.037292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:11.037356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:11.037364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:11.037371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:11.037403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 00:29:12.834 [2024-10-30 14:16:11.047273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:11.047333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:11.047359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:11.047366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:11.047372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:11.047392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 
00:29:12.834 [2024-10-30 14:16:11.057302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.834 [2024-10-30 14:16:11.057386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.834 [2024-10-30 14:16:11.057406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.834 [2024-10-30 14:16:11.057414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.834 [2024-10-30 14:16:11.057420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.834 [2024-10-30 14:16:11.057439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.834 qpair failed and we were unable to recover it. 00:29:12.834 [2024-10-30 14:16:11.067356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.835 [2024-10-30 14:16:11.067436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.835 [2024-10-30 14:16:11.067455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.835 [2024-10-30 14:16:11.067463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.835 [2024-10-30 14:16:11.067469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.835 [2024-10-30 14:16:11.067488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.835 qpair failed and we were unable to recover it. 00:29:12.835 [2024-10-30 14:16:11.077367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.835 [2024-10-30 14:16:11.077452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.835 [2024-10-30 14:16:11.077490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.835 [2024-10-30 14:16:11.077500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.835 [2024-10-30 14:16:11.077507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.835 [2024-10-30 14:16:11.077533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.835 qpair failed and we were unable to recover it. 
00:29:12.835 [2024-10-30 14:16:11.087418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.835 [2024-10-30 14:16:11.087494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.835 [2024-10-30 14:16:11.087533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.835 [2024-10-30 14:16:11.087542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.835 [2024-10-30 14:16:11.087557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.835 [2024-10-30 14:16:11.087582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.835 qpair failed and we were unable to recover it. 00:29:12.835 [2024-10-30 14:16:11.097330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.835 [2024-10-30 14:16:11.097410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.835 [2024-10-30 14:16:11.097448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.835 [2024-10-30 14:16:11.097458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.835 [2024-10-30 14:16:11.097465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.835 [2024-10-30 14:16:11.097491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.835 qpair failed and we were unable to recover it. 00:29:12.835 [2024-10-30 14:16:11.107447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.835 [2024-10-30 14:16:11.107523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.835 [2024-10-30 14:16:11.107543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.835 [2024-10-30 14:16:11.107551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.835 [2024-10-30 14:16:11.107558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.835 [2024-10-30 14:16:11.107577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.835 qpair failed and we were unable to recover it. 
00:29:12.835 [2024-10-30 14:16:11.117395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.835 [2024-10-30 14:16:11.117466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.835 [2024-10-30 14:16:11.117485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.835 [2024-10-30 14:16:11.117493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.835 [2024-10-30 14:16:11.117499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.835 [2024-10-30 14:16:11.117517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.835 qpair failed and we were unable to recover it. 00:29:12.835 [2024-10-30 14:16:11.127466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.835 [2024-10-30 14:16:11.127537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.835 [2024-10-30 14:16:11.127556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.835 [2024-10-30 14:16:11.127563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.835 [2024-10-30 14:16:11.127570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:12.835 [2024-10-30 14:16:11.127587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.835 qpair failed and we were unable to recover it. 00:29:13.097 [2024-10-30 14:16:11.137555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.097 [2024-10-30 14:16:11.137625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.097 [2024-10-30 14:16:11.137644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.097 [2024-10-30 14:16:11.137652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.097 [2024-10-30 14:16:11.137658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.097 [2024-10-30 14:16:11.137677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.097 qpair failed and we were unable to recover it. 
00:29:13.097 [2024-10-30 14:16:11.147595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.097 [2024-10-30 14:16:11.147663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.097 [2024-10-30 14:16:11.147681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.097 [2024-10-30 14:16:11.147688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.097 [2024-10-30 14:16:11.147695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.097 [2024-10-30 14:16:11.147713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.097 qpair failed and we were unable to recover it. 00:29:13.097 [2024-10-30 14:16:11.157619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.097 [2024-10-30 14:16:11.157693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.097 [2024-10-30 14:16:11.157712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.097 [2024-10-30 14:16:11.157720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.097 [2024-10-30 14:16:11.157726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.157744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 00:29:13.098 [2024-10-30 14:16:11.167660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.167730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.167752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.167760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.167767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.167784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 
00:29:13.098 [2024-10-30 14:16:11.177666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.177771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.177795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.177802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.177809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.177827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 00:29:13.098 [2024-10-30 14:16:11.187712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.187782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.187800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.187807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.187814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.187831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 00:29:13.098 [2024-10-30 14:16:11.197757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.197822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.197839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.197846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.197853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.197870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 
00:29:13.098 [2024-10-30 14:16:11.207641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.207711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.207729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.207736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.207743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.207769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 00:29:13.098 [2024-10-30 14:16:11.217785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.217884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.217903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.217915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.217923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.217940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 00:29:13.098 [2024-10-30 14:16:11.227811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.227881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.227899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.227906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.227912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.227930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 
00:29:13.098 [2024-10-30 14:16:11.237823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.237890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.237908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.237916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.237922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.237939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 00:29:13.098 [2024-10-30 14:16:11.247873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.247942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.247959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.247967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.247973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.247990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 00:29:13.098 [2024-10-30 14:16:11.257799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.257874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.257893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.257900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.257906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.098 [2024-10-30 14:16:11.257924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.098 qpair failed and we were unable to recover it. 
00:29:13.098 [2024-10-30 14:16:11.267980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.098 [2024-10-30 14:16:11.268050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.098 [2024-10-30 14:16:11.268068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.098 [2024-10-30 14:16:11.268075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.098 [2024-10-30 14:16:11.268082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.268099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 00:29:13.099 [2024-10-30 14:16:11.277976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.278035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.278053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.278060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.278067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.278084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 00:29:13.099 [2024-10-30 14:16:11.287896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.287956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.287976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.287984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.287991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.288009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 
00:29:13.099 [2024-10-30 14:16:11.297939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.298020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.298039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.298047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.298053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.298071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 00:29:13.099 [2024-10-30 14:16:11.308201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.308271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.308289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.308297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.308303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.308321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 00:29:13.099 [2024-10-30 14:16:11.317987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.318051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.318069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.318076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.318083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.318100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 
00:29:13.099 [2024-10-30 14:16:11.328103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.328162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.328179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.328187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.328193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.328211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 00:29:13.099 [2024-10-30 14:16:11.338173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.338249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.338267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.338275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.338281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.338299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 00:29:13.099 [2024-10-30 14:16:11.348223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.348301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.348318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.348335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.348342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.348359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 
00:29:13.099 [2024-10-30 14:16:11.358229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.358298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.358316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.358324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.358330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.358347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 00:29:13.099 [2024-10-30 14:16:11.368284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.368348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.368365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.368372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.368379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.099 [2024-10-30 14:16:11.368396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.099 qpair failed and we were unable to recover it. 00:29:13.099 [2024-10-30 14:16:11.378288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.099 [2024-10-30 14:16:11.378354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.099 [2024-10-30 14:16:11.378371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.099 [2024-10-30 14:16:11.378378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.099 [2024-10-30 14:16:11.378385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.100 [2024-10-30 14:16:11.378402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.100 qpair failed and we were unable to recover it. 
00:29:13.100 [2024-10-30 14:16:11.388346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.100 [2024-10-30 14:16:11.388425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.100 [2024-10-30 14:16:11.388442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.100 [2024-10-30 14:16:11.388449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.100 [2024-10-30 14:16:11.388455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.100 [2024-10-30 14:16:11.388478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.100 qpair failed and we were unable to recover it. 00:29:13.363 [2024-10-30 14:16:11.398275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.398338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.398356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.398363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.363 [2024-10-30 14:16:11.398370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.363 [2024-10-30 14:16:11.398387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.363 qpair failed and we were unable to recover it. 00:29:13.363 [2024-10-30 14:16:11.408362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.408418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.408436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.408444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.363 [2024-10-30 14:16:11.408450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.363 [2024-10-30 14:16:11.408467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.363 qpair failed and we were unable to recover it. 
00:29:13.363 [2024-10-30 14:16:11.418411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.418488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.418505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.418512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.363 [2024-10-30 14:16:11.418519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.363 [2024-10-30 14:16:11.418536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.363 qpair failed and we were unable to recover it. 00:29:13.363 [2024-10-30 14:16:11.428484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.428557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.428595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.428605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.363 [2024-10-30 14:16:11.428613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.363 [2024-10-30 14:16:11.428638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.363 qpair failed and we were unable to recover it. 00:29:13.363 [2024-10-30 14:16:11.438493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.438559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.438580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.438588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.363 [2024-10-30 14:16:11.438594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.363 [2024-10-30 14:16:11.438614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.363 qpair failed and we were unable to recover it. 
00:29:13.363 [2024-10-30 14:16:11.448506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.448569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.448588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.448596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.363 [2024-10-30 14:16:11.448603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.363 [2024-10-30 14:16:11.448621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.363 qpair failed and we were unable to recover it. 00:29:13.363 [2024-10-30 14:16:11.458575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.458646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.458664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.458672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.363 [2024-10-30 14:16:11.458679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.363 [2024-10-30 14:16:11.458697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.363 qpair failed and we were unable to recover it. 00:29:13.363 [2024-10-30 14:16:11.468590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.468655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.468673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.468681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.363 [2024-10-30 14:16:11.468687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.363 [2024-10-30 14:16:11.468705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.363 qpair failed and we were unable to recover it. 
00:29:13.363 [2024-10-30 14:16:11.478586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.478651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.478677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.478685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.363 [2024-10-30 14:16:11.478691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.363 [2024-10-30 14:16:11.478710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.363 qpair failed and we were unable to recover it. 00:29:13.363 [2024-10-30 14:16:11.488627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.363 [2024-10-30 14:16:11.488692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.363 [2024-10-30 14:16:11.488710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.363 [2024-10-30 14:16:11.488717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.488724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.488742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 00:29:13.364 [2024-10-30 14:16:11.498661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.498730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.498753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.498761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.498768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.498786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 
00:29:13.364 [2024-10-30 14:16:11.508786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.508852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.508869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.508877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.508883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.508901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 00:29:13.364 [2024-10-30 14:16:11.518688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.518763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.518782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.518790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.518797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.518822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 00:29:13.364 [2024-10-30 14:16:11.528758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.528855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.528873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.528881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.528888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.528907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 
00:29:13.364 [2024-10-30 14:16:11.538791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.538856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.538874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.538882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.538889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.538908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 00:29:13.364 [2024-10-30 14:16:11.548874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.548947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.548964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.548972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.548980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.548998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 00:29:13.364 [2024-10-30 14:16:11.558732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.558811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.558829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.558837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.558843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.558861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 
00:29:13.364 [2024-10-30 14:16:11.568872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.568935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.568952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.568960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.568967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.568984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 00:29:13.364 [2024-10-30 14:16:11.578930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.578996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.579013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.579020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.579027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.579044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 00:29:13.364 [2024-10-30 14:16:11.588992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.589068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.589086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.589093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.589100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.589117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 
00:29:13.364 [2024-10-30 14:16:11.598990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.599056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.599074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.599081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.599087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.599105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 00:29:13.364 [2024-10-30 14:16:11.609031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.609089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.609114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.609122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.609128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.609146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 00:29:13.364 [2024-10-30 14:16:11.619087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.619157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.619175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.619183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.364 [2024-10-30 14:16:11.619189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.364 [2024-10-30 14:16:11.619206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.364 qpair failed and we were unable to recover it. 
00:29:13.364 [2024-10-30 14:16:11.629101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.364 [2024-10-30 14:16:11.629212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.364 [2024-10-30 14:16:11.629231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.364 [2024-10-30 14:16:11.629240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.365 [2024-10-30 14:16:11.629247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.365 [2024-10-30 14:16:11.629265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.365 qpair failed and we were unable to recover it. 00:29:13.365 [2024-10-30 14:16:11.639060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.365 [2024-10-30 14:16:11.639113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.365 [2024-10-30 14:16:11.639132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.365 [2024-10-30 14:16:11.639139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.365 [2024-10-30 14:16:11.639145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.365 [2024-10-30 14:16:11.639163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.365 qpair failed and we were unable to recover it. 00:29:13.365 [2024-10-30 14:16:11.649097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.365 [2024-10-30 14:16:11.649165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.365 [2024-10-30 14:16:11.649183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.365 [2024-10-30 14:16:11.649190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.365 [2024-10-30 14:16:11.649203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.365 [2024-10-30 14:16:11.649220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.365 qpair failed and we were unable to recover it. 
00:29:13.365 [2024-10-30 14:16:11.659170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.365 [2024-10-30 14:16:11.659239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.365 [2024-10-30 14:16:11.659257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.365 [2024-10-30 14:16:11.659265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.365 [2024-10-30 14:16:11.659272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.365 [2024-10-30 14:16:11.659289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.365 qpair failed and we were unable to recover it. 00:29:13.627 [2024-10-30 14:16:11.669204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.669273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.669290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.669298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.669305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.669323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 00:29:13.627 [2024-10-30 14:16:11.679132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.679199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.679217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.679225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.679231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.679249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 
00:29:13.627 [2024-10-30 14:16:11.689184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.689244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.689262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.689269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.689275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.689293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 00:29:13.627 [2024-10-30 14:16:11.699260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.699325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.699344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.699351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.699357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.699375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 00:29:13.627 [2024-10-30 14:16:11.709298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.709358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.709374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.709382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.709388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.709405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 
00:29:13.627 [2024-10-30 14:16:11.719350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.719416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.719436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.719444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.719450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.719469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 00:29:13.627 [2024-10-30 14:16:11.729194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.729246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.729266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.729273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.729280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.729298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 00:29:13.627 [2024-10-30 14:16:11.739394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.739459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.739480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.739487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.739494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.739510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 
00:29:13.627 [2024-10-30 14:16:11.749423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.749504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.749536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.749545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.749552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.749575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 00:29:13.627 [2024-10-30 14:16:11.759415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.759475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.759507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.759516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.759524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.759547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 00:29:13.627 [2024-10-30 14:16:11.769415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.769466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.769484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.769491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.769497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.627 [2024-10-30 14:16:11.769514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.627 qpair failed and we were unable to recover it. 
00:29:13.627 [2024-10-30 14:16:11.779452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.627 [2024-10-30 14:16:11.779512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.627 [2024-10-30 14:16:11.779528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.627 [2024-10-30 14:16:11.779541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.627 [2024-10-30 14:16:11.779548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.779564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.789525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.789590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.789605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.789612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.789619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.789634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.799526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.799583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.799598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.799605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.799612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.799627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 
00:29:13.628 [2024-10-30 14:16:11.809523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.809572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.809587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.809594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.809600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.809615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.819601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.819655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.819669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.819677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.819683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.819698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.829654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.829720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.829735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.829742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.829754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.829769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 
00:29:13.628 [2024-10-30 14:16:11.839629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.839678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.839692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.839699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.839705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.839720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.849622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.849678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.849692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.849699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.849705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.849720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.859699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.859767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.859782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.859788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.859795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.859809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 
00:29:13.628 [2024-10-30 14:16:11.869694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.869754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.869768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.869775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.869781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.869796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.879744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.879800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.879814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.879821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.879827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.879842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.889734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.889784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.889798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.889805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.889811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.889826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 
00:29:13.628 [2024-10-30 14:16:11.899785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.899841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.899854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.899861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.899868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.899882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.909834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.909893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.909907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.909917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.628 [2024-10-30 14:16:11.909923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.628 [2024-10-30 14:16:11.909938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.628 qpair failed and we were unable to recover it. 00:29:13.628 [2024-10-30 14:16:11.919848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.628 [2024-10-30 14:16:11.919900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.628 [2024-10-30 14:16:11.919914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.628 [2024-10-30 14:16:11.919921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.629 [2024-10-30 14:16:11.919927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.629 [2024-10-30 14:16:11.919941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.629 qpair failed and we were unable to recover it. 
00:29:13.890 [2024-10-30 14:16:11.929828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.890 [2024-10-30 14:16:11.929923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.890 [2024-10-30 14:16:11.929937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.890 [2024-10-30 14:16:11.929944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.890 [2024-10-30 14:16:11.929950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.890 [2024-10-30 14:16:11.929965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.890 qpair failed and we were unable to recover it. 00:29:13.890 [2024-10-30 14:16:11.939917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.890 [2024-10-30 14:16:11.939974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.890 [2024-10-30 14:16:11.939988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.890 [2024-10-30 14:16:11.939995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.890 [2024-10-30 14:16:11.940002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.890 [2024-10-30 14:16:11.940017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.890 qpair failed and we were unable to recover it. 00:29:13.890 [2024-10-30 14:16:11.949942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:11.949999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:11.950012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:11.950019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:11.950026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:11.950043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 
00:29:13.891 [2024-10-30 14:16:11.959972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:11.960022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:11.960036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:11.960043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:11.960049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:11.960063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 00:29:13.891 [2024-10-30 14:16:11.969899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:11.969944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:11.969957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:11.969963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:11.969970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:11.969984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 00:29:13.891 [2024-10-30 14:16:11.980028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:11.980083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:11.980096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:11.980103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:11.980109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:11.980123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 
00:29:13.891 [2024-10-30 14:16:11.990076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:11.990127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:11.990140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:11.990147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:11.990153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:11.990167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 00:29:13.891 [2024-10-30 14:16:11.999959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:12.000007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:12.000020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:12.000027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:12.000034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:12.000048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 00:29:13.891 [2024-10-30 14:16:12.010070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:12.010123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:12.010137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:12.010144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:12.010150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:12.010165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 
00:29:13.891 [2024-10-30 14:16:12.020145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:12.020198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:12.020211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:12.020218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:12.020225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:12.020239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 00:29:13.891 [2024-10-30 14:16:12.030189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:12.030243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:12.030256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:12.030262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:12.030269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:12.030283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 00:29:13.891 [2024-10-30 14:16:12.040205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:12.040255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:12.040271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:12.040278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:12.040285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:12.040299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 
00:29:13.891 [2024-10-30 14:16:12.050174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:12.050225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:12.050238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:12.050245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:12.050252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:12.050266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 00:29:13.891 [2024-10-30 14:16:12.060256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:12.060313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:12.060326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:12.060333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:12.060339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:12.060353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 00:29:13.891 [2024-10-30 14:16:12.070403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:12.070468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:12.070482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:12.070489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:12.070495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:12.070509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.891 qpair failed and we were unable to recover it. 
00:29:13.891 [2024-10-30 14:16:12.080357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.891 [2024-10-30 14:16:12.080408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.891 [2024-10-30 14:16:12.080422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.891 [2024-10-30 14:16:12.080429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.891 [2024-10-30 14:16:12.080442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.891 [2024-10-30 14:16:12.080457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 00:29:13.892 [2024-10-30 14:16:12.090320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.090364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.090378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.090385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.090391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.090406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 00:29:13.892 [2024-10-30 14:16:12.100423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.100478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.100492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.100499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.100505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.100519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 
00:29:13.892 [2024-10-30 14:16:12.110408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.110465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.110478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.110485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.110492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.110506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 00:29:13.892 [2024-10-30 14:16:12.120426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.120478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.120491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.120498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.120504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.120518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 00:29:13.892 [2024-10-30 14:16:12.130401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.130489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.130502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.130509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.130516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.130530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 
00:29:13.892 [2024-10-30 14:16:12.140484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.140537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.140550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.140557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.140564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.140577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 00:29:13.892 [2024-10-30 14:16:12.150529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.150624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.150637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.150644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.150650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.150664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 00:29:13.892 [2024-10-30 14:16:12.160531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.160578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.160591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.160598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.160605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.160619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 
00:29:13.892 [2024-10-30 14:16:12.170531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.170575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.170592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.170599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.170605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.170620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 00:29:13.892 [2024-10-30 14:16:12.180592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.892 [2024-10-30 14:16:12.180646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.892 [2024-10-30 14:16:12.180659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.892 [2024-10-30 14:16:12.180666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.892 [2024-10-30 14:16:12.180672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:13.892 [2024-10-30 14:16:12.180686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.892 qpair failed and we were unable to recover it. 00:29:14.155 [2024-10-30 14:16:12.190605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.155 [2024-10-30 14:16:12.190663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.155 [2024-10-30 14:16:12.190677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.155 [2024-10-30 14:16:12.190684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.155 [2024-10-30 14:16:12.190690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.155 [2024-10-30 14:16:12.190704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.155 qpair failed and we were unable to recover it. 
00:29:14.155 [2024-10-30 14:16:12.200638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.155 [2024-10-30 14:16:12.200689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.155 [2024-10-30 14:16:12.200703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.155 [2024-10-30 14:16:12.200710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.155 [2024-10-30 14:16:12.200716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.155 [2024-10-30 14:16:12.200730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.155 qpair failed and we were unable to recover it. 00:29:14.155 [2024-10-30 14:16:12.210650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.155 [2024-10-30 14:16:12.210711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.155 [2024-10-30 14:16:12.210724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.155 [2024-10-30 14:16:12.210731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.155 [2024-10-30 14:16:12.210741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.155 [2024-10-30 14:16:12.210759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.155 qpair failed and we were unable to recover it. 00:29:14.155 [2024-10-30 14:16:12.220709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.155 [2024-10-30 14:16:12.220766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.155 [2024-10-30 14:16:12.220779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.155 [2024-10-30 14:16:12.220786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.155 [2024-10-30 14:16:12.220793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.155 [2024-10-30 14:16:12.220807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.155 qpair failed and we were unable to recover it. 
00:29:14.155 [2024-10-30 14:16:12.230798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.155 [2024-10-30 14:16:12.230851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.155 [2024-10-30 14:16:12.230864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.155 [2024-10-30 14:16:12.230871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.155 [2024-10-30 14:16:12.230877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.155 [2024-10-30 14:16:12.230891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.155 qpair failed and we were unable to recover it. 00:29:14.155 [2024-10-30 14:16:12.240627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.155 [2024-10-30 14:16:12.240679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.155 [2024-10-30 14:16:12.240692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.240699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.240706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.240720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 00:29:14.156 [2024-10-30 14:16:12.250756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.250798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.250811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.250818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.250825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.250839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 
00:29:14.156 [2024-10-30 14:16:12.260816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.260869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.260883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.260890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.260896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.260910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 00:29:14.156 [2024-10-30 14:16:12.270789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.270845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.270858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.270865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.270871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.270886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 00:29:14.156 [2024-10-30 14:16:12.280849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.280908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.280923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.280931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.280937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.280955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 
00:29:14.156 [2024-10-30 14:16:12.290855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.290910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.290924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.290931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.290937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.290952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 00:29:14.156 [2024-10-30 14:16:12.300928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.300982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.300999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.301006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.301012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.301026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 00:29:14.156 [2024-10-30 14:16:12.310976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.311034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.311048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.311055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.311061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.311075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 
00:29:14.156 [2024-10-30 14:16:12.320990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.321063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.321076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.321083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.321089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.321103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 00:29:14.156 [2024-10-30 14:16:12.330955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.331001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.331014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.331021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.331028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.331041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 00:29:14.156 [2024-10-30 14:16:12.341052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.341105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.341119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.341129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.341135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.341150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 
00:29:14.156 [2024-10-30 14:16:12.351079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.351134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.351147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.351154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.351161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.351175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 00:29:14.156 [2024-10-30 14:16:12.361057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.361151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.361164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.361171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.361178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.361192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 00:29:14.156 [2024-10-30 14:16:12.371071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.371120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.156 [2024-10-30 14:16:12.371133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.156 [2024-10-30 14:16:12.371140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.156 [2024-10-30 14:16:12.371147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.156 [2024-10-30 14:16:12.371161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.156 qpair failed and we were unable to recover it. 
00:29:14.156 [2024-10-30 14:16:12.381125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.156 [2024-10-30 14:16:12.381191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.157 [2024-10-30 14:16:12.381204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.157 [2024-10-30 14:16:12.381210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.157 [2024-10-30 14:16:12.381217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.157 [2024-10-30 14:16:12.381231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.157 qpair failed and we were unable to recover it. 00:29:14.157 [2024-10-30 14:16:12.391196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.157 [2024-10-30 14:16:12.391252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.157 [2024-10-30 14:16:12.391265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.157 [2024-10-30 14:16:12.391273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.157 [2024-10-30 14:16:12.391279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.157 [2024-10-30 14:16:12.391293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.157 qpair failed and we were unable to recover it. 00:29:14.157 [2024-10-30 14:16:12.401157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.157 [2024-10-30 14:16:12.401202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.157 [2024-10-30 14:16:12.401215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.157 [2024-10-30 14:16:12.401224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.157 [2024-10-30 14:16:12.401231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.157 [2024-10-30 14:16:12.401246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.157 qpair failed and we were unable to recover it. 
00:29:14.157 [2024-10-30 14:16:12.411180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.157 [2024-10-30 14:16:12.411229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.157 [2024-10-30 14:16:12.411242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.157 [2024-10-30 14:16:12.411249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.157 [2024-10-30 14:16:12.411256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.157 [2024-10-30 14:16:12.411270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.157 qpair failed and we were unable to recover it. 00:29:14.157 [2024-10-30 14:16:12.421255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.157 [2024-10-30 14:16:12.421336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.157 [2024-10-30 14:16:12.421349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.157 [2024-10-30 14:16:12.421357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.157 [2024-10-30 14:16:12.421363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.157 [2024-10-30 14:16:12.421377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.157 qpair failed and we were unable to recover it. 00:29:14.157 [2024-10-30 14:16:12.431279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.157 [2024-10-30 14:16:12.431368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.157 [2024-10-30 14:16:12.431382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.157 [2024-10-30 14:16:12.431388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.157 [2024-10-30 14:16:12.431395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.157 [2024-10-30 14:16:12.431409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.157 qpair failed and we were unable to recover it. 
00:29:14.157 [2024-10-30 14:16:12.441149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.157 [2024-10-30 14:16:12.441197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.157 [2024-10-30 14:16:12.441214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.157 [2024-10-30 14:16:12.441221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.157 [2024-10-30 14:16:12.441230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.157 [2024-10-30 14:16:12.441246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.157 qpair failed and we were unable to recover it. 00:29:14.157 [2024-10-30 14:16:12.451308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.157 [2024-10-30 14:16:12.451389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.157 [2024-10-30 14:16:12.451402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.157 [2024-10-30 14:16:12.451409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.157 [2024-10-30 14:16:12.451416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.157 [2024-10-30 14:16:12.451430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.157 qpair failed and we were unable to recover it. 00:29:14.419 [2024-10-30 14:16:12.461362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.419 [2024-10-30 14:16:12.461414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.419 [2024-10-30 14:16:12.461428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.419 [2024-10-30 14:16:12.461435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.419 [2024-10-30 14:16:12.461442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.419 [2024-10-30 14:16:12.461456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.419 qpair failed and we were unable to recover it. 
00:29:14.419 [2024-10-30 14:16:12.471395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.419 [2024-10-30 14:16:12.471448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.419 [2024-10-30 14:16:12.471461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.419 [2024-10-30 14:16:12.471472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.419 [2024-10-30 14:16:12.471478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.419 [2024-10-30 14:16:12.471497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.419 qpair failed and we were unable to recover it. 00:29:14.419 [2024-10-30 14:16:12.481370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.419 [2024-10-30 14:16:12.481423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.419 [2024-10-30 14:16:12.481437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.419 [2024-10-30 14:16:12.481443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.419 [2024-10-30 14:16:12.481450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.419 [2024-10-30 14:16:12.481464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.419 qpair failed and we were unable to recover it. 00:29:14.419 [2024-10-30 14:16:12.491423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.419 [2024-10-30 14:16:12.491476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.419 [2024-10-30 14:16:12.491491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.491498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.491504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.491522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 
00:29:14.420 [2024-10-30 14:16:12.501475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.501530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.501545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.501551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.501558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.501572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 00:29:14.420 [2024-10-30 14:16:12.511529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.511588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.511601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.511608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.511614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.511633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 00:29:14.420 [2024-10-30 14:16:12.521478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.521527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.521541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.521548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.521554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.521568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 
00:29:14.420 [2024-10-30 14:16:12.531496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.531543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.531556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.531563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.531569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.531584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 00:29:14.420 [2024-10-30 14:16:12.541593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.541649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.541663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.541670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.541676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.541691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 00:29:14.420 [2024-10-30 14:16:12.551612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.551678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.551691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.551698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.551704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.551719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 
00:29:14.420 [2024-10-30 14:16:12.561609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.561658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.561672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.561679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.561686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.561700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 00:29:14.420 [2024-10-30 14:16:12.571602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.571649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.571662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.571669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.571676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.571690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 00:29:14.420 [2024-10-30 14:16:12.581712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.581766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.581780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.581787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.581793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.581807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 
00:29:14.420 [2024-10-30 14:16:12.591739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.591796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.591809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.591816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.591822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.591837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 00:29:14.420 [2024-10-30 14:16:12.601713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.601764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.601781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.601789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.601796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.601811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 00:29:14.420 [2024-10-30 14:16:12.611737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.611794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.611808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.611815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.611821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.611835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 
00:29:14.420 [2024-10-30 14:16:12.621810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.621867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.420 [2024-10-30 14:16:12.621880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.420 [2024-10-30 14:16:12.621887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.420 [2024-10-30 14:16:12.621894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.420 [2024-10-30 14:16:12.621908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.420 qpair failed and we were unable to recover it. 00:29:14.420 [2024-10-30 14:16:12.631886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.420 [2024-10-30 14:16:12.631951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.421 [2024-10-30 14:16:12.631964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.421 [2024-10-30 14:16:12.631971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.421 [2024-10-30 14:16:12.631978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.421 [2024-10-30 14:16:12.631992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-10-30 14:16:12.641820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.421 [2024-10-30 14:16:12.641866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.421 [2024-10-30 14:16:12.641879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.421 [2024-10-30 14:16:12.641886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.421 [2024-10-30 14:16:12.641896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.421 [2024-10-30 14:16:12.641911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.421 qpair failed and we were unable to recover it. 
00:29:14.421 [2024-10-30 14:16:12.651899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.421 [2024-10-30 14:16:12.651986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.421 [2024-10-30 14:16:12.651999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.421 [2024-10-30 14:16:12.652006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.421 [2024-10-30 14:16:12.652012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.421 [2024-10-30 14:16:12.652026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-10-30 14:16:12.661839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.421 [2024-10-30 14:16:12.661938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.421 [2024-10-30 14:16:12.661952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.421 [2024-10-30 14:16:12.661959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.421 [2024-10-30 14:16:12.661965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.421 [2024-10-30 14:16:12.661979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-10-30 14:16:12.671837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.421 [2024-10-30 14:16:12.671887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.421 [2024-10-30 14:16:12.671900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.421 [2024-10-30 14:16:12.671907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.421 [2024-10-30 14:16:12.671913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.421 [2024-10-30 14:16:12.671928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.421 qpair failed and we were unable to recover it. 
00:29:14.421 [2024-10-30 14:16:12.681940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.421 [2024-10-30 14:16:12.681988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.421 [2024-10-30 14:16:12.682001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.421 [2024-10-30 14:16:12.682007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.421 [2024-10-30 14:16:12.682014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.421 [2024-10-30 14:16:12.682028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-10-30 14:16:12.691955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.421 [2024-10-30 14:16:12.692005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.421 [2024-10-30 14:16:12.692018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.421 [2024-10-30 14:16:12.692025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.421 [2024-10-30 14:16:12.692031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.421 [2024-10-30 14:16:12.692046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-10-30 14:16:12.702009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.421 [2024-10-30 14:16:12.702061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.421 [2024-10-30 14:16:12.702075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.421 [2024-10-30 14:16:12.702082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.421 [2024-10-30 14:16:12.702088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.421 [2024-10-30 14:16:12.702102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.421 qpair failed and we were unable to recover it. 
00:29:14.421 [2024-10-30 14:16:12.712096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.421 [2024-10-30 14:16:12.712147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.421 [2024-10-30 14:16:12.712160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.421 [2024-10-30 14:16:12.712166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.421 [2024-10-30 14:16:12.712173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.421 [2024-10-30 14:16:12.712187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.685 [2024-10-30 14:16:12.722055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.722099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.722112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.722119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.722126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.722140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-10-30 14:16:12.731948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.731998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.732015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.732023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.732029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.732044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-10-30 14:16:12.742168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.742223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.742237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.742244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.742250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.742264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-10-30 14:16:12.752152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.752206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.752219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.752226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.752233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.752246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-10-30 14:16:12.762136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.762202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.762215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.762222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.762229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.762243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-10-30 14:16:12.772185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.772230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.772243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.772250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.772263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.772278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-10-30 14:16:12.782253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.782307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.782320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.782327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.782333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.782348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-10-30 14:16:12.792279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.792385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.792398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.792406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.792412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.792426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-10-30 14:16:12.802239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.802286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.802300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.802307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.802313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.802328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-10-30 14:16:12.812276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.812319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.812333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.812340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.812346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.812360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-10-30 14:16:12.822366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.822421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.822434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.822441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.822448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.822461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-10-30 14:16:12.832377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.832435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.832448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.685 [2024-10-30 14:16:12.832455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.685 [2024-10-30 14:16:12.832462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.685 [2024-10-30 14:16:12.832476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-10-30 14:16:12.842381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.685 [2024-10-30 14:16:12.842423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.685 [2024-10-30 14:16:12.842436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.842443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.842450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.842464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-10-30 14:16:12.852389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.852446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.852471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.852479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.852487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.852506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-10-30 14:16:12.862470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.862527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.862553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.862560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.862567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.862583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-10-30 14:16:12.872512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.872568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.872593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.872602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.872609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.872629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-10-30 14:16:12.882484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.882530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.882545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.882553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.882559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.882575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-10-30 14:16:12.892510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.892557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.892571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.892579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.892585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.892599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-10-30 14:16:12.902581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.902635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.902648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.902660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.902667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.902681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-10-30 14:16:12.912604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.912657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.912670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.912677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.912683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.912698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-10-30 14:16:12.922459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.922511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.922524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.922531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.922538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.922552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-10-30 14:16:12.932618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.932663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.932677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.932684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.932691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.932705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-10-30 14:16:12.942697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.942752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.942766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.942773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.942779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.942799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-10-30 14:16:12.952712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.952773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.952788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.952797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.952804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.952820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-10-30 14:16:12.962696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.962744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.962762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.962769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.962775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.962790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-10-30 14:16:12.972728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.972773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.686 [2024-10-30 14:16:12.972786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.686 [2024-10-30 14:16:12.972793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.686 [2024-10-30 14:16:12.972800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.686 [2024-10-30 14:16:12.972814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-10-30 14:16:12.982798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.686 [2024-10-30 14:16:12.982854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.687 [2024-10-30 14:16:12.982868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.687 [2024-10-30 14:16:12.982875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.687 [2024-10-30 14:16:12.982881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.687 [2024-10-30 14:16:12.982895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.687 qpair failed and we were unable to recover it. 00:29:14.949 [2024-10-30 14:16:12.992817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.949 [2024-10-30 14:16:12.992880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.949 [2024-10-30 14:16:12.992894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.949 [2024-10-30 14:16:12.992900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.949 [2024-10-30 14:16:12.992907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.949 [2024-10-30 14:16:12.992921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-10-30 14:16:13.002806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.949 [2024-10-30 14:16:13.002856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.949 [2024-10-30 14:16:13.002870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.949 [2024-10-30 14:16:13.002877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.949 [2024-10-30 14:16:13.002883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.949 [2024-10-30 14:16:13.002898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.949 qpair failed and we were unable to recover it. 
00:29:14.949 [2024-10-30 14:16:13.012823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.949 [2024-10-30 14:16:13.012875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.949 [2024-10-30 14:16:13.012888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.949 [2024-10-30 14:16:13.012895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.949 [2024-10-30 14:16:13.012901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.949 [2024-10-30 14:16:13.012916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-10-30 14:16:13.022899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.949 [2024-10-30 14:16:13.022951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.949 [2024-10-30 14:16:13.022965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.949 [2024-10-30 14:16:13.022972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.949 [2024-10-30 14:16:13.022978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.949 [2024-10-30 14:16:13.022992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-10-30 14:16:13.032938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.949 [2024-10-30 14:16:13.033000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.949 [2024-10-30 14:16:13.033013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.949 [2024-10-30 14:16:13.033024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.949 [2024-10-30 14:16:13.033030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.949 [2024-10-30 14:16:13.033044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.949 qpair failed and we were unable to recover it. 
00:29:14.949 [2024-10-30 14:16:13.042779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.949 [2024-10-30 14:16:13.042830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.949 [2024-10-30 14:16:13.042843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.949 [2024-10-30 14:16:13.042851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.949 [2024-10-30 14:16:13.042857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.949 [2024-10-30 14:16:13.042872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-10-30 14:16:13.052933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.052979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.052993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.053000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.053006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.053020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-10-30 14:16:13.062991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.063050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.063063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.063070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.063076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.063090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 
00:29:14.950 [2024-10-30 14:16:13.073030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.073082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.073095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.073102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.073109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.073126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-10-30 14:16:13.083035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.083082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.083095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.083102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.083108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.083122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-10-30 14:16:13.092965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.093018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.093041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.093049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.093055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.093075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 
00:29:14.950 [2024-10-30 14:16:13.103112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.103170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.103183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.103190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.103197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.103211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-10-30 14:16:13.113148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.113201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.113215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.113222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.113228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.113242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-10-30 14:16:13.122994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.123042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.123057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.123064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.123070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.123085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 
00:29:14.950 [2024-10-30 14:16:13.133146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.133195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.133209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.133216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.133222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.133236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-10-30 14:16:13.143218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.143272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.143285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.143293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.143299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.143313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-10-30 14:16:13.153239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.153340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.153353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.153360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.153366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.153381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 
00:29:14.950 [2024-10-30 14:16:13.163206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.163253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.163270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.163277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.163283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.163297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-10-30 14:16:13.173252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.173297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.173310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.173317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.173323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.173337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-10-30 14:16:13.183319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.950 [2024-10-30 14:16:13.183375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.950 [2024-10-30 14:16:13.183388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.950 [2024-10-30 14:16:13.183395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.950 [2024-10-30 14:16:13.183401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.950 [2024-10-30 14:16:13.183415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.950 qpair failed and we were unable to recover it. 
00:29:14.950 [2024-10-30 14:16:13.193349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.951 [2024-10-30 14:16:13.193410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.951 [2024-10-30 14:16:13.193424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.951 [2024-10-30 14:16:13.193430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.951 [2024-10-30 14:16:13.193437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.951 [2024-10-30 14:16:13.193451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-10-30 14:16:13.203309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.951 [2024-10-30 14:16:13.203357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.951 [2024-10-30 14:16:13.203370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.951 [2024-10-30 14:16:13.203377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.951 [2024-10-30 14:16:13.203387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.951 [2024-10-30 14:16:13.203402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-10-30 14:16:13.213322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.951 [2024-10-30 14:16:13.213406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.951 [2024-10-30 14:16:13.213419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.951 [2024-10-30 14:16:13.213426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.951 [2024-10-30 14:16:13.213432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.951 [2024-10-30 14:16:13.213447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.951 qpair failed and we were unable to recover it. 
00:29:14.951 [2024-10-30 14:16:13.223435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.951 [2024-10-30 14:16:13.223488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.951 [2024-10-30 14:16:13.223503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.951 [2024-10-30 14:16:13.223510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.951 [2024-10-30 14:16:13.223516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.951 [2024-10-30 14:16:13.223530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-10-30 14:16:13.233444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.951 [2024-10-30 14:16:13.233509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.951 [2024-10-30 14:16:13.233522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.951 [2024-10-30 14:16:13.233529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.951 [2024-10-30 14:16:13.233536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.951 [2024-10-30 14:16:13.233550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-10-30 14:16:13.243427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.951 [2024-10-30 14:16:13.243472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.951 [2024-10-30 14:16:13.243485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.951 [2024-10-30 14:16:13.243492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.951 [2024-10-30 14:16:13.243498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:14.951 [2024-10-30 14:16:13.243512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.951 qpair failed and we were unable to recover it. 
00:29:15.213 [2024-10-30 14:16:13.253444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.213 [2024-10-30 14:16:13.253491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.213 [2024-10-30 14:16:13.253504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.213 [2024-10-30 14:16:13.253511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.213 [2024-10-30 14:16:13.253518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.213 [2024-10-30 14:16:13.253532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.213 qpair failed and we were unable to recover it. 00:29:15.213 [2024-10-30 14:16:13.263528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.213 [2024-10-30 14:16:13.263591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.213 [2024-10-30 14:16:13.263616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.213 [2024-10-30 14:16:13.263624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.213 [2024-10-30 14:16:13.263631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.213 [2024-10-30 14:16:13.263651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.213 qpair failed and we were unable to recover it. 00:29:15.213 [2024-10-30 14:16:13.273545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.213 [2024-10-30 14:16:13.273606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.213 [2024-10-30 14:16:13.273621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.213 [2024-10-30 14:16:13.273628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.213 [2024-10-30 14:16:13.273635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.213 [2024-10-30 14:16:13.273650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.213 qpair failed and we were unable to recover it. 
00:29:15.213 [2024-10-30 14:16:13.283554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.213 [2024-10-30 14:16:13.283599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.213 [2024-10-30 14:16:13.283613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.213 [2024-10-30 14:16:13.283620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.213 [2024-10-30 14:16:13.283626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.213 [2024-10-30 14:16:13.283640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.213 qpair failed and we were unable to recover it. 00:29:15.213 [2024-10-30 14:16:13.293561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.213 [2024-10-30 14:16:13.293610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.213 [2024-10-30 14:16:13.293629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.213 [2024-10-30 14:16:13.293636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.213 [2024-10-30 14:16:13.293642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.213 [2024-10-30 14:16:13.293657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.213 qpair failed and we were unable to recover it. 00:29:15.213 [2024-10-30 14:16:13.303625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.213 [2024-10-30 14:16:13.303681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.213 [2024-10-30 14:16:13.303695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.213 [2024-10-30 14:16:13.303702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.303708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.303722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 
00:29:15.214 [2024-10-30 14:16:13.313644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.313699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.313713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.313719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.313726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.313740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 00:29:15.214 [2024-10-30 14:16:13.323671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.323716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.323730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.323737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.323743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.323761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 00:29:15.214 [2024-10-30 14:16:13.333698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.333741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.333758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.333765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.333775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.333789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 
00:29:15.214 [2024-10-30 14:16:13.343634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.343696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.343709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.343716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.343723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.343737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 00:29:15.214 [2024-10-30 14:16:13.353789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.353859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.353873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.353880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.353886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.353900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 00:29:15.214 [2024-10-30 14:16:13.363773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.363822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.363836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.363843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.363849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.363864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 
00:29:15.214 [2024-10-30 14:16:13.373799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.373848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.373862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.373869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.373875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.373889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 00:29:15.214 [2024-10-30 14:16:13.383879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.383933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.383947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.383954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.383960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.383974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 00:29:15.214 [2024-10-30 14:16:13.393894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.393952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.393965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.393972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.393979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.393993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 
00:29:15.214 [2024-10-30 14:16:13.403874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.403918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.403931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.403938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.403945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.403959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 00:29:15.214 [2024-10-30 14:16:13.413917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.413967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.413981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.413988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.413994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.414008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 00:29:15.214 [2024-10-30 14:16:13.423998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.424056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.424073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.424080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.424086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.424101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 
00:29:15.214 [2024-10-30 14:16:13.434041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.434126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.434140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.214 [2024-10-30 14:16:13.434147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.214 [2024-10-30 14:16:13.434153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.214 [2024-10-30 14:16:13.434168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.214 qpair failed and we were unable to recover it. 00:29:15.214 [2024-10-30 14:16:13.443989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.214 [2024-10-30 14:16:13.444072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.214 [2024-10-30 14:16:13.444086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.215 [2024-10-30 14:16:13.444093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.215 [2024-10-30 14:16:13.444099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.215 [2024-10-30 14:16:13.444113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.215 qpair failed and we were unable to recover it. 00:29:15.215 [2024-10-30 14:16:13.454035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.215 [2024-10-30 14:16:13.454123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.215 [2024-10-30 14:16:13.454136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.215 [2024-10-30 14:16:13.454143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.215 [2024-10-30 14:16:13.454150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.215 [2024-10-30 14:16:13.454165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.215 qpair failed and we were unable to recover it. 
00:29:15.215 [2024-10-30 14:16:13.464102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.215 [2024-10-30 14:16:13.464176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.215 [2024-10-30 14:16:13.464189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.215 [2024-10-30 14:16:13.464203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.215 [2024-10-30 14:16:13.464210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.215 [2024-10-30 14:16:13.464224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.215 qpair failed and we were unable to recover it. 00:29:15.215 [2024-10-30 14:16:13.474074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.215 [2024-10-30 14:16:13.474128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.215 [2024-10-30 14:16:13.474141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.215 [2024-10-30 14:16:13.474148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.215 [2024-10-30 14:16:13.474154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.215 [2024-10-30 14:16:13.474168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.215 qpair failed and we were unable to recover it. 00:29:15.215 [2024-10-30 14:16:13.483958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.215 [2024-10-30 14:16:13.484010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.215 [2024-10-30 14:16:13.484023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.215 [2024-10-30 14:16:13.484030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.215 [2024-10-30 14:16:13.484036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.215 [2024-10-30 14:16:13.484050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.215 qpair failed and we were unable to recover it. 
00:29:15.215 [2024-10-30 14:16:13.494106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.215 [2024-10-30 14:16:13.494152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.215 [2024-10-30 14:16:13.494165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.215 [2024-10-30 14:16:13.494172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.215 [2024-10-30 14:16:13.494178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.215 [2024-10-30 14:16:13.494192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.215 qpair failed and we were unable to recover it. 00:29:15.215 [2024-10-30 14:16:13.504190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.215 [2024-10-30 14:16:13.504274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.215 [2024-10-30 14:16:13.504287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.215 [2024-10-30 14:16:13.504294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.215 [2024-10-30 14:16:13.504301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.215 [2024-10-30 14:16:13.504318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.215 qpair failed and we were unable to recover it. 00:29:15.478 [2024-10-30 14:16:13.514192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.478 [2024-10-30 14:16:13.514244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.478 [2024-10-30 14:16:13.514257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.478 [2024-10-30 14:16:13.514264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.478 [2024-10-30 14:16:13.514270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.478 [2024-10-30 14:16:13.514284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.478 qpair failed and we were unable to recover it. 
00:29:15.478 [2024-10-30 14:16:13.524152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.524223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.479 [2024-10-30 14:16:13.524237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.479 [2024-10-30 14:16:13.524244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.479 [2024-10-30 14:16:13.524250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.479 [2024-10-30 14:16:13.524264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.479 qpair failed and we were unable to recover it. 00:29:15.479 [2024-10-30 14:16:13.534229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.534279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.479 [2024-10-30 14:16:13.534292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.479 [2024-10-30 14:16:13.534299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.479 [2024-10-30 14:16:13.534305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.479 [2024-10-30 14:16:13.534319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.479 qpair failed and we were unable to recover it. 00:29:15.479 [2024-10-30 14:16:13.544295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.544347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.479 [2024-10-30 14:16:13.544360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.479 [2024-10-30 14:16:13.544367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.479 [2024-10-30 14:16:13.544374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.479 [2024-10-30 14:16:13.544388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.479 qpair failed and we were unable to recover it. 
00:29:15.479 [2024-10-30 14:16:13.554354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.554414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.479 [2024-10-30 14:16:13.554427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.479 [2024-10-30 14:16:13.554434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.479 [2024-10-30 14:16:13.554441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.479 [2024-10-30 14:16:13.554455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.479 qpair failed and we were unable to recover it. 00:29:15.479 [2024-10-30 14:16:13.564302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.564352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.479 [2024-10-30 14:16:13.564365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.479 [2024-10-30 14:16:13.564372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.479 [2024-10-30 14:16:13.564378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.479 [2024-10-30 14:16:13.564393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.479 qpair failed and we were unable to recover it. 00:29:15.479 [2024-10-30 14:16:13.574343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.574388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.479 [2024-10-30 14:16:13.574401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.479 [2024-10-30 14:16:13.574408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.479 [2024-10-30 14:16:13.574415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.479 [2024-10-30 14:16:13.574429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.479 qpair failed and we were unable to recover it. 
00:29:15.479 [2024-10-30 14:16:13.584415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.584470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.479 [2024-10-30 14:16:13.584484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.479 [2024-10-30 14:16:13.584492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.479 [2024-10-30 14:16:13.584498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.479 [2024-10-30 14:16:13.584512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.479 qpair failed and we were unable to recover it. 00:29:15.479 [2024-10-30 14:16:13.594430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.594481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.479 [2024-10-30 14:16:13.594495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.479 [2024-10-30 14:16:13.594505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.479 [2024-10-30 14:16:13.594512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.479 [2024-10-30 14:16:13.594526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.479 qpair failed and we were unable to recover it. 00:29:15.479 [2024-10-30 14:16:13.604407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.604450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.479 [2024-10-30 14:16:13.604464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.479 [2024-10-30 14:16:13.604471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.479 [2024-10-30 14:16:13.604477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.479 [2024-10-30 14:16:13.604492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.479 qpair failed and we were unable to recover it. 
00:29:15.479 [2024-10-30 14:16:13.614448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.479 [2024-10-30 14:16:13.614510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.614523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.614531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.614537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.614551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 00:29:15.480 [2024-10-30 14:16:13.624488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.480 [2024-10-30 14:16:13.624542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.624555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.624562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.624569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.624583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 00:29:15.480 [2024-10-30 14:16:13.634514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.480 [2024-10-30 14:16:13.634563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.634576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.634583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.634589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.634607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 
00:29:15.480 [2024-10-30 14:16:13.644514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.480 [2024-10-30 14:16:13.644564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.644578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.644585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.644591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.644605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 00:29:15.480 [2024-10-30 14:16:13.654542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.480 [2024-10-30 14:16:13.654588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.654601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.654609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.654615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.654629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 00:29:15.480 [2024-10-30 14:16:13.664607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.480 [2024-10-30 14:16:13.664658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.664672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.664679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.664685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.664699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 
00:29:15.480 [2024-10-30 14:16:13.674610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.480 [2024-10-30 14:16:13.674657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.674671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.674678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.674684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.674698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 00:29:15.480 [2024-10-30 14:16:13.684612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.480 [2024-10-30 14:16:13.684658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.684672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.684679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.684685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.684700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 00:29:15.480 [2024-10-30 14:16:13.694663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.480 [2024-10-30 14:16:13.694707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.694721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.694728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.694734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.694752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 
00:29:15.480 [2024-10-30 14:16:13.704760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.480 [2024-10-30 14:16:13.704845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.480 [2024-10-30 14:16:13.704858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.480 [2024-10-30 14:16:13.704865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.480 [2024-10-30 14:16:13.704871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.480 [2024-10-30 14:16:13.704886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.480 qpair failed and we were unable to recover it. 00:29:15.480 [2024-10-30 14:16:13.714733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.481 [2024-10-30 14:16:13.714788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.481 [2024-10-30 14:16:13.714802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.481 [2024-10-30 14:16:13.714809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.481 [2024-10-30 14:16:13.714815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.481 [2024-10-30 14:16:13.714830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.481 qpair failed and we were unable to recover it. 00:29:15.481 [2024-10-30 14:16:13.724709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.481 [2024-10-30 14:16:13.724760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.481 [2024-10-30 14:16:13.724777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.481 [2024-10-30 14:16:13.724784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.481 [2024-10-30 14:16:13.724791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.481 [2024-10-30 14:16:13.724806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.481 qpair failed and we were unable to recover it. 
00:29:15.481 [2024-10-30 14:16:13.734719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.481 [2024-10-30 14:16:13.734789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.481 [2024-10-30 14:16:13.734803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.481 [2024-10-30 14:16:13.734810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.481 [2024-10-30 14:16:13.734817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.481 [2024-10-30 14:16:13.734831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.481 qpair failed and we were unable to recover it. 00:29:15.481 [2024-10-30 14:16:13.744859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.481 [2024-10-30 14:16:13.744946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.481 [2024-10-30 14:16:13.744959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.481 [2024-10-30 14:16:13.744966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.481 [2024-10-30 14:16:13.744973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.481 [2024-10-30 14:16:13.744987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.481 qpair failed and we were unable to recover it. 00:29:15.481 [2024-10-30 14:16:13.754836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.481 [2024-10-30 14:16:13.754888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.481 [2024-10-30 14:16:13.754901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.481 [2024-10-30 14:16:13.754908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.481 [2024-10-30 14:16:13.754915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.481 [2024-10-30 14:16:13.754929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.481 qpair failed and we were unable to recover it. 
00:29:15.481 [2024-10-30 14:16:13.764718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.481 [2024-10-30 14:16:13.764766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.481 [2024-10-30 14:16:13.764779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.481 [2024-10-30 14:16:13.764786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.481 [2024-10-30 14:16:13.764797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.481 [2024-10-30 14:16:13.764811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.481 qpair failed and we were unable to recover it. 00:29:15.481 [2024-10-30 14:16:13.774881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.481 [2024-10-30 14:16:13.774926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.481 [2024-10-30 14:16:13.774940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.481 [2024-10-30 14:16:13.774947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.481 [2024-10-30 14:16:13.774953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.481 [2024-10-30 14:16:13.774967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.481 qpair failed and we were unable to recover it. 00:29:15.744 [2024-10-30 14:16:13.784915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.744 [2024-10-30 14:16:13.784974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.744 [2024-10-30 14:16:13.784988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.744 [2024-10-30 14:16:13.784995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.744 [2024-10-30 14:16:13.785001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.744 [2024-10-30 14:16:13.785015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.744 qpair failed and we were unable to recover it. 
00:29:15.744 [2024-10-30 14:16:13.794936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.744 [2024-10-30 14:16:13.794985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.744 [2024-10-30 14:16:13.794998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.744 [2024-10-30 14:16:13.795005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.744 [2024-10-30 14:16:13.795012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.744 [2024-10-30 14:16:13.795026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.744 qpair failed and we were unable to recover it. 00:29:15.744 [2024-10-30 14:16:13.804925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.744 [2024-10-30 14:16:13.804972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.744 [2024-10-30 14:16:13.804985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.744 [2024-10-30 14:16:13.804992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.744 [2024-10-30 14:16:13.804998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.744 [2024-10-30 14:16:13.805012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.744 qpair failed and we were unable to recover it. 00:29:15.744 [2024-10-30 14:16:13.814894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.744 [2024-10-30 14:16:13.814958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.744 [2024-10-30 14:16:13.814971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.744 [2024-10-30 14:16:13.814978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.744 [2024-10-30 14:16:13.814984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.744 [2024-10-30 14:16:13.814998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.744 qpair failed and we were unable to recover it. 
00:29:15.744 [2024-10-30 14:16:13.824935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.744 [2024-10-30 14:16:13.825002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.744 [2024-10-30 14:16:13.825015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.744 [2024-10-30 14:16:13.825023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.744 [2024-10-30 14:16:13.825029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.825043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 00:29:15.745 [2024-10-30 14:16:13.835040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.835092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.835105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.835112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.835119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.835133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 00:29:15.745 [2024-10-30 14:16:13.845068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.845144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.845157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.845164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.845171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.845185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 
00:29:15.745 [2024-10-30 14:16:13.855054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.855102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.855119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.855126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.855132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.855146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 00:29:15.745 [2024-10-30 14:16:13.865166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.865259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.865272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.865279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.865286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.865300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 00:29:15.745 [2024-10-30 14:16:13.875141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.875189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.875202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.875209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.875216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.875230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 
00:29:15.745 [2024-10-30 14:16:13.885164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.885212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.885226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.885233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.885239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.885253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 00:29:15.745 [2024-10-30 14:16:13.895194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.895239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.895252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.895259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.895268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.895283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 00:29:15.745 [2024-10-30 14:16:13.905261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.905312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.905325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.905333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.905339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.905353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 
00:29:15.745 [2024-10-30 14:16:13.915268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.915368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.915381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.915388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.915394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.915408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 00:29:15.745 [2024-10-30 14:16:13.925223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.925271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.745 [2024-10-30 14:16:13.925284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.745 [2024-10-30 14:16:13.925291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.745 [2024-10-30 14:16:13.925298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.745 [2024-10-30 14:16:13.925312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.745 qpair failed and we were unable to recover it. 00:29:15.745 [2024-10-30 14:16:13.935266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.745 [2024-10-30 14:16:13.935313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:13.935326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:13.935333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:13.935340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:13.935354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 
00:29:15.746 [2024-10-30 14:16:13.945370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:13.945422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:13.945436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:13.945443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:13.945449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:13.945464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 00:29:15.746 [2024-10-30 14:16:13.955244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:13.955291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:13.955306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:13.955313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:13.955319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:13.955334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 00:29:15.746 [2024-10-30 14:16:13.965377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:13.965466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:13.965480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:13.965487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:13.965493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:13.965507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 
00:29:15.746 [2024-10-30 14:16:13.975412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:13.975456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:13.975470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:13.975477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:13.975484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:13.975498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 00:29:15.746 [2024-10-30 14:16:13.985489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:13.985546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:13.985564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:13.985571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:13.985578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:13.985596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 00:29:15.746 [2024-10-30 14:16:13.995495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:13.995549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:13.995574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:13.995582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:13.995589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:13.995609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 
00:29:15.746 [2024-10-30 14:16:14.005528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:14.005580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:14.005595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:14.005603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:14.005609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:14.005625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 00:29:15.746 [2024-10-30 14:16:14.015530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:14.015587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:14.015601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:14.015608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:14.015614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:14.015629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 00:29:15.746 [2024-10-30 14:16:14.025481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:14.025535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:14.025549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:14.025561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:14.025567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.746 [2024-10-30 14:16:14.025582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.746 qpair failed and we were unable to recover it. 
00:29:15.746 [2024-10-30 14:16:14.035594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.746 [2024-10-30 14:16:14.035641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.746 [2024-10-30 14:16:14.035655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.746 [2024-10-30 14:16:14.035662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.746 [2024-10-30 14:16:14.035668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:15.747 [2024-10-30 14:16:14.035683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.747 qpair failed and we were unable to recover it. 00:29:16.009 [2024-10-30 14:16:14.045613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.009 [2024-10-30 14:16:14.045659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.009 [2024-10-30 14:16:14.045673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.009 [2024-10-30 14:16:14.045680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.009 [2024-10-30 14:16:14.045687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.009 [2024-10-30 14:16:14.045701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.009 qpair failed and we were unable to recover it. 00:29:16.009 [2024-10-30 14:16:14.055623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.009 [2024-10-30 14:16:14.055693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.009 [2024-10-30 14:16:14.055706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.009 [2024-10-30 14:16:14.055713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.009 [2024-10-30 14:16:14.055720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.009 [2024-10-30 14:16:14.055734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.009 qpair failed and we were unable to recover it. 
00:29:16.009 [2024-10-30 14:16:14.065681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.009 [2024-10-30 14:16:14.065736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.009 [2024-10-30 14:16:14.065755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.009 [2024-10-30 14:16:14.065762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.009 [2024-10-30 14:16:14.065769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.009 [2024-10-30 14:16:14.065787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.009 qpair failed and we were unable to recover it. 00:29:16.009 [2024-10-30 14:16:14.075578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.009 [2024-10-30 14:16:14.075631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.009 [2024-10-30 14:16:14.075645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.009 [2024-10-30 14:16:14.075652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.009 [2024-10-30 14:16:14.075658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.009 [2024-10-30 14:16:14.075673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.009 qpair failed and we were unable to recover it. 00:29:16.009 [2024-10-30 14:16:14.085580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.009 [2024-10-30 14:16:14.085629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.009 [2024-10-30 14:16:14.085642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.009 [2024-10-30 14:16:14.085650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.009 [2024-10-30 14:16:14.085656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.009 [2024-10-30 14:16:14.085670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.009 qpair failed and we were unable to recover it. 
00:29:16.009 [2024-10-30 14:16:14.095740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.009 [2024-10-30 14:16:14.095817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.009 [2024-10-30 14:16:14.095831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.009 [2024-10-30 14:16:14.095838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.009 [2024-10-30 14:16:14.095844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.009 [2024-10-30 14:16:14.095858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.009 qpair failed and we were unable to recover it. 00:29:16.009 [2024-10-30 14:16:14.105763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.009 [2024-10-30 14:16:14.105819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.009 [2024-10-30 14:16:14.105834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.009 [2024-10-30 14:16:14.105841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.009 [2024-10-30 14:16:14.105848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.009 [2024-10-30 14:16:14.105867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 00:29:16.010 [2024-10-30 14:16:14.115814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.115878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.115892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.115899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.115906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.115920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 
00:29:16.010 [2024-10-30 14:16:14.125816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.125862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.125875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.125882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.125889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.125903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 00:29:16.010 [2024-10-30 14:16:14.135837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.135883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.135897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.135904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.135910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.135925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 00:29:16.010 [2024-10-30 14:16:14.145904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.145955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.145968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.145975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.145982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.145996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 
00:29:16.010 [2024-10-30 14:16:14.155879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.155930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.155945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.155956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.155964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.155981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 00:29:16.010 [2024-10-30 14:16:14.165918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.165963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.165976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.165984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.165990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.166004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 00:29:16.010 [2024-10-30 14:16:14.175964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.176009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.176022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.176029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.176035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.176049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 
00:29:16.010 [2024-10-30 14:16:14.186037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.186090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.186104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.186111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.186117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.186131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 00:29:16.010 [2024-10-30 14:16:14.195910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.195958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.195971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.195978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.195985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.196006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 00:29:16.010 [2024-10-30 14:16:14.206048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.206097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.206111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.206118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.206125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.206139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 
00:29:16.010 [2024-10-30 14:16:14.216006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.216052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.216066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.216073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.216079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.216093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 00:29:16.010 [2024-10-30 14:16:14.226064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.226117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.226130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.226137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.226144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.226158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 00:29:16.010 [2024-10-30 14:16:14.236117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.236164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.236177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.010 [2024-10-30 14:16:14.236184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.010 [2024-10-30 14:16:14.236191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.010 [2024-10-30 14:16:14.236205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.010 qpair failed and we were unable to recover it. 
00:29:16.010 [2024-10-30 14:16:14.246123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.010 [2024-10-30 14:16:14.246176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.010 [2024-10-30 14:16:14.246190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.011 [2024-10-30 14:16:14.246197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.011 [2024-10-30 14:16:14.246203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.011 [2024-10-30 14:16:14.246217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.011 qpair failed and we were unable to recover it. 00:29:16.011 [2024-10-30 14:16:14.256153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.011 [2024-10-30 14:16:14.256201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.011 [2024-10-30 14:16:14.256215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.011 [2024-10-30 14:16:14.256222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.011 [2024-10-30 14:16:14.256228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.011 [2024-10-30 14:16:14.256242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.011 qpair failed and we were unable to recover it. 00:29:16.011 [2024-10-30 14:16:14.266145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.011 [2024-10-30 14:16:14.266222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.011 [2024-10-30 14:16:14.266235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.011 [2024-10-30 14:16:14.266242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.011 [2024-10-30 14:16:14.266248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.011 [2024-10-30 14:16:14.266262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.011 qpair failed and we were unable to recover it. 
00:29:16.011 [2024-10-30 14:16:14.276107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.011 [2024-10-30 14:16:14.276160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.011 [2024-10-30 14:16:14.276173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.011 [2024-10-30 14:16:14.276180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.011 [2024-10-30 14:16:14.276186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.011 [2024-10-30 14:16:14.276200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.011 qpair failed and we were unable to recover it. 00:29:16.011 [2024-10-30 14:16:14.286131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.011 [2024-10-30 14:16:14.286179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.011 [2024-10-30 14:16:14.286197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.011 [2024-10-30 14:16:14.286204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.011 [2024-10-30 14:16:14.286210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.011 [2024-10-30 14:16:14.286225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.011 qpair failed and we were unable to recover it. 00:29:16.011 [2024-10-30 14:16:14.296261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.011 [2024-10-30 14:16:14.296306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.011 [2024-10-30 14:16:14.296319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.011 [2024-10-30 14:16:14.296326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.011 [2024-10-30 14:16:14.296332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.011 [2024-10-30 14:16:14.296346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.011 qpair failed and we were unable to recover it. 
00:29:16.011 [2024-10-30 14:16:14.306352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.011 [2024-10-30 14:16:14.306425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.011 [2024-10-30 14:16:14.306439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.011 [2024-10-30 14:16:14.306446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.011 [2024-10-30 14:16:14.306452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.011 [2024-10-30 14:16:14.306467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.011 qpair failed and we were unable to recover it. 00:29:16.274 [2024-10-30 14:16:14.316268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.274 [2024-10-30 14:16:14.316339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.274 [2024-10-30 14:16:14.316353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.274 [2024-10-30 14:16:14.316360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.274 [2024-10-30 14:16:14.316367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.274 [2024-10-30 14:16:14.316381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.274 qpair failed and we were unable to recover it. 00:29:16.274 [2024-10-30 14:16:14.326334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.274 [2024-10-30 14:16:14.326376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.274 [2024-10-30 14:16:14.326389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.274 [2024-10-30 14:16:14.326397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.274 [2024-10-30 14:16:14.326406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.274 [2024-10-30 14:16:14.326421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.274 qpair failed and we were unable to recover it. 
00:29:16.274 [2024-10-30 14:16:14.336348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.274 [2024-10-30 14:16:14.336395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.274 [2024-10-30 14:16:14.336408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.274 [2024-10-30 14:16:14.336415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.274 [2024-10-30 14:16:14.336422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.274 [2024-10-30 14:16:14.336436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.274 qpair failed and we were unable to recover it. 00:29:16.274 [2024-10-30 14:16:14.346386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.274 [2024-10-30 14:16:14.346432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.274 [2024-10-30 14:16:14.346446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.274 [2024-10-30 14:16:14.346453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.274 [2024-10-30 14:16:14.346459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.274 [2024-10-30 14:16:14.346473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.274 qpair failed and we were unable to recover it. 00:29:16.274 [2024-10-30 14:16:14.356430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.274 [2024-10-30 14:16:14.356474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.274 [2024-10-30 14:16:14.356488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.274 [2024-10-30 14:16:14.356495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.274 [2024-10-30 14:16:14.356502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.274 [2024-10-30 14:16:14.356516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.274 qpair failed and we were unable to recover it. 
00:29:16.274 [2024-10-30 14:16:14.366301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.274 [2024-10-30 14:16:14.366347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.274 [2024-10-30 14:16:14.366360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.274 [2024-10-30 14:16:14.366367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.274 [2024-10-30 14:16:14.366374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.274 [2024-10-30 14:16:14.366388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.274 qpair failed and we were unable to recover it. 00:29:16.274 [2024-10-30 14:16:14.376443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.274 [2024-10-30 14:16:14.376488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.274 [2024-10-30 14:16:14.376502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.274 [2024-10-30 14:16:14.376509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.274 [2024-10-30 14:16:14.376515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.274 [2024-10-30 14:16:14.376529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.274 qpair failed and we were unable to recover it. 00:29:16.274 [2024-10-30 14:16:14.386484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.274 [2024-10-30 14:16:14.386575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.274 [2024-10-30 14:16:14.386589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.274 [2024-10-30 14:16:14.386596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.274 [2024-10-30 14:16:14.386602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.386616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 
00:29:16.275 [2024-10-30 14:16:14.396532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.396585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.396610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.396619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.396626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.396646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 00:29:16.275 [2024-10-30 14:16:14.406502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.406545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.406561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.406568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.406575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.406590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 00:29:16.275 [2024-10-30 14:16:14.416561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.416629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.416648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.416655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.416662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.416676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 
00:29:16.275 [2024-10-30 14:16:14.426590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.426634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.426648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.426655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.426661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.426675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 00:29:16.275 [2024-10-30 14:16:14.436626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.436680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.436695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.436702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.436709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.436728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 00:29:16.275 [2024-10-30 14:16:14.446608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.446652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.446667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.446674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.446680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.446695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 
00:29:16.275 [2024-10-30 14:16:14.456642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.456687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.456701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.456708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.456719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.456733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 00:29:16.275 [2024-10-30 14:16:14.466703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.466792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.466814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.466821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.466828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.466843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 00:29:16.275 [2024-10-30 14:16:14.476608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.476659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.476673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.476680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.476686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.476702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 
00:29:16.275 [2024-10-30 14:16:14.486749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.486791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.486805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.486812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.486818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.486833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 00:29:16.275 [2024-10-30 14:16:14.496649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.496695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.496709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.496716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.496722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.496737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 00:29:16.275 [2024-10-30 14:16:14.506809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.506857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.506870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.506877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.506884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.506898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 
00:29:16.275 [2024-10-30 14:16:14.516837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.516895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.516908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.275 [2024-10-30 14:16:14.516915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.275 [2024-10-30 14:16:14.516922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.275 [2024-10-30 14:16:14.516936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.275 qpair failed and we were unable to recover it. 00:29:16.275 [2024-10-30 14:16:14.526846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.275 [2024-10-30 14:16:14.526942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.275 [2024-10-30 14:16:14.526955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.276 [2024-10-30 14:16:14.526962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.276 [2024-10-30 14:16:14.526969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.276 [2024-10-30 14:16:14.526983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.276 qpair failed and we were unable to recover it. 00:29:16.276 [2024-10-30 14:16:14.536886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.276 [2024-10-30 14:16:14.536932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.276 [2024-10-30 14:16:14.536945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.276 [2024-10-30 14:16:14.536952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.276 [2024-10-30 14:16:14.536959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.276 [2024-10-30 14:16:14.536973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.276 qpair failed and we were unable to recover it. 
00:29:16.276 [2024-10-30 14:16:14.546910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.276 [2024-10-30 14:16:14.546956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.276 [2024-10-30 14:16:14.546973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.276 [2024-10-30 14:16:14.546980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.276 [2024-10-30 14:16:14.546986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.276 [2024-10-30 14:16:14.547000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.276 qpair failed and we were unable to recover it. 00:29:16.276 [2024-10-30 14:16:14.556933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.276 [2024-10-30 14:16:14.556977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.276 [2024-10-30 14:16:14.556991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.276 [2024-10-30 14:16:14.556998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.276 [2024-10-30 14:16:14.557004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.276 [2024-10-30 14:16:14.557018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.276 qpair failed and we were unable to recover it. 00:29:16.276 [2024-10-30 14:16:14.566963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.276 [2024-10-30 14:16:14.567008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.276 [2024-10-30 14:16:14.567021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.276 [2024-10-30 14:16:14.567028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.276 [2024-10-30 14:16:14.567034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.276 [2024-10-30 14:16:14.567048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.276 qpair failed and we were unable to recover it. 
00:29:16.539 [2024-10-30 14:16:14.576995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.539 [2024-10-30 14:16:14.577036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.539 [2024-10-30 14:16:14.577049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.539 [2024-10-30 14:16:14.577056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.539 [2024-10-30 14:16:14.577062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.539 [2024-10-30 14:16:14.577076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.539 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.587024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.587069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.587082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.587093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.587099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.587113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.597065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.597114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.597126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.597133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.597140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.597154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 
00:29:16.540 [2024-10-30 14:16:14.607057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.607101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.607114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.607121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.607128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.607142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.617098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.617141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.617155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.617162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.617168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.617182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.627121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.627176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.627189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.627196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.627202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.627220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 
00:29:16.540 [2024-10-30 14:16:14.637156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.637204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.637217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.637224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.637230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.637244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.647180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.647225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.647238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.647245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.647252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.647265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.657194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.657234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.657247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.657254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.657260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.657275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 
00:29:16.540 [2024-10-30 14:16:14.667239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.667286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.667299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.667306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.667313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.667326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.677269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.677325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.677338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.677345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.677352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.677365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.687163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.687231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.687244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.687251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.687258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.687272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 
00:29:16.540 [2024-10-30 14:16:14.697318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.697363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.697376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.697383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.697389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.697403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.707352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.707396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.707410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.540 [2024-10-30 14:16:14.707417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.540 [2024-10-30 14:16:14.707423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.540 [2024-10-30 14:16:14.707438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.540 qpair failed and we were unable to recover it. 00:29:16.540 [2024-10-30 14:16:14.717380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.540 [2024-10-30 14:16:14.717428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.540 [2024-10-30 14:16:14.717442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.717452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.717458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.717473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 
00:29:16.541 [2024-10-30 14:16:14.727356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.727399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.727412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.727419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.727425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.727439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 00:29:16.541 [2024-10-30 14:16:14.737434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.737479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.737492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.737499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.737506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.737520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 00:29:16.541 [2024-10-30 14:16:14.747444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.747491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.747504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.747511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.747518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.747532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 
00:29:16.541 [2024-10-30 14:16:14.757479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.757523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.757537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.757544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.757550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.757568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 00:29:16.541 [2024-10-30 14:16:14.767494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.767538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.767552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.767559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.767565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.767579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 00:29:16.541 [2024-10-30 14:16:14.777521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.777595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.777608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.777615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.777621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.777635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 
00:29:16.541 [2024-10-30 14:16:14.787539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.787582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.787595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.787602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.787608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.787622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 00:29:16.541 [2024-10-30 14:16:14.797587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.797644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.797658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.797665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.797671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.797685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 00:29:16.541 [2024-10-30 14:16:14.807588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.807644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.807657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.807665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.807671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.807685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 
00:29:16.541 [2024-10-30 14:16:14.817628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.817719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.817733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.817740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.817752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.817767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 00:29:16.541 [2024-10-30 14:16:14.827662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.541 [2024-10-30 14:16:14.827711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.541 [2024-10-30 14:16:14.827724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.541 [2024-10-30 14:16:14.827731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.541 [2024-10-30 14:16:14.827738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.541 [2024-10-30 14:16:14.827755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.541 qpair failed and we were unable to recover it. 00:29:16.541 [2024-10-30 14:16:14.837709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.837802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.837815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.837823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.837832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.837848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 
00:29:16.804 [2024-10-30 14:16:14.847707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.847804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.847821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.847828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.847834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.847848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 00:29:16.804 [2024-10-30 14:16:14.857758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.857833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.857847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.857855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.857861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.857875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 00:29:16.804 [2024-10-30 14:16:14.867775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.867820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.867833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.867840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.867847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.867861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 
00:29:16.804 [2024-10-30 14:16:14.877907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.877997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.878010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.878017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.878024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.878038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 00:29:16.804 [2024-10-30 14:16:14.887829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.887874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.887888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.887895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.887908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.887923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 00:29:16.804 [2024-10-30 14:16:14.897882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.897924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.897938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.897944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.897951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.897965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 
00:29:16.804 [2024-10-30 14:16:14.907889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.907933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.907946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.907954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.907960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.907974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 00:29:16.804 [2024-10-30 14:16:14.917908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.917958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.917972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.917979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.917985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.918000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 00:29:16.804 [2024-10-30 14:16:14.927928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.927977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.927990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.927997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.928004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.928018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 
00:29:16.804 [2024-10-30 14:16:14.937843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.937885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.937899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.937906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.937913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.937927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 00:29:16.804 [2024-10-30 14:16:14.947978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.948027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.948041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.948048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.804 [2024-10-30 14:16:14.948055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.804 [2024-10-30 14:16:14.948069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.804 qpair failed and we were unable to recover it. 00:29:16.804 [2024-10-30 14:16:14.958028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.804 [2024-10-30 14:16:14.958077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.804 [2024-10-30 14:16:14.958091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.804 [2024-10-30 14:16:14.958098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:14.958105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:14.958119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 
00:29:16.805 [2024-10-30 14:16:14.968016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:14.968071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:14.968084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:14.968091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:14.968097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:14.968111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 00:29:16.805 [2024-10-30 14:16:14.978023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:14.978069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:14.978085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:14.978092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:14.978099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:14.978113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 00:29:16.805 [2024-10-30 14:16:14.988088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:14.988134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:14.988148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:14.988155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:14.988161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:14.988175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 
00:29:16.805 [2024-10-30 14:16:14.998081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:14.998131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:14.998145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:14.998152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:14.998158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:14.998172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 00:29:16.805 [2024-10-30 14:16:15.008136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:15.008177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:15.008190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:15.008197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:15.008203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:15.008217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 00:29:16.805 [2024-10-30 14:16:15.018151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:15.018191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:15.018204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:15.018211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:15.018221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:15.018236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 
00:29:16.805 [2024-10-30 14:16:15.028198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:15.028247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:15.028260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:15.028267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:15.028273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:15.028287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 00:29:16.805 [2024-10-30 14:16:15.038232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:15.038282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:15.038296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:15.038302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:15.038309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:15.038323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 00:29:16.805 [2024-10-30 14:16:15.048251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:15.048296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:15.048309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:15.048316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:15.048322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:15.048336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 
00:29:16.805 [2024-10-30 14:16:15.058264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:15.058305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:15.058318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:15.058325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:15.058331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:15.058346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 00:29:16.805 [2024-10-30 14:16:15.068302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:15.068356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:15.068370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:15.068377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:15.068383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:15.068397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 00:29:16.805 [2024-10-30 14:16:15.078335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:15.078377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:15.078391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:15.078398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:15.078404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:15.078419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.805 qpair failed and we were unable to recover it. 
00:29:16.805 [2024-10-30 14:16:15.088466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.805 [2024-10-30 14:16:15.088510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.805 [2024-10-30 14:16:15.088523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.805 [2024-10-30 14:16:15.088530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.805 [2024-10-30 14:16:15.088537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.805 [2024-10-30 14:16:15.088551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.806 qpair failed and we were unable to recover it. 00:29:16.806 [2024-10-30 14:16:15.098389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.806 [2024-10-30 14:16:15.098434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.806 [2024-10-30 14:16:15.098448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.806 [2024-10-30 14:16:15.098454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.806 [2024-10-30 14:16:15.098461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:16.806 [2024-10-30 14:16:15.098475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.806 qpair failed and we were unable to recover it. 00:29:17.067 [2024-10-30 14:16:15.108372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.067 [2024-10-30 14:16:15.108419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.067 [2024-10-30 14:16:15.108433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.067 [2024-10-30 14:16:15.108440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.067 [2024-10-30 14:16:15.108446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:17.067 [2024-10-30 14:16:15.108460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.067 qpair failed and we were unable to recover it. 
00:29:17.067 [2024-10-30 14:16:15.118449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.067 [2024-10-30 14:16:15.118496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.067 [2024-10-30 14:16:15.118509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.067 [2024-10-30 14:16:15.118516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.067 [2024-10-30 14:16:15.118522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:17.067 [2024-10-30 14:16:15.118536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.067 qpair failed and we were unable to recover it. 00:29:17.067 [2024-10-30 14:16:15.128497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.067 [2024-10-30 14:16:15.128546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.067 [2024-10-30 14:16:15.128560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.067 [2024-10-30 14:16:15.128567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.067 [2024-10-30 14:16:15.128573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:17.067 [2024-10-30 14:16:15.128587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.067 qpair failed and we were unable to recover it. 00:29:17.067 [2024-10-30 14:16:15.138486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.067 [2024-10-30 14:16:15.138531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.067 [2024-10-30 14:16:15.138545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.067 [2024-10-30 14:16:15.138552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.067 [2024-10-30 14:16:15.138558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:17.067 [2024-10-30 14:16:15.138572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.067 qpair failed and we were unable to recover it. 
00:29:17.067 [2024-10-30 14:16:15.148398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.067 [2024-10-30 14:16:15.148445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.068 [2024-10-30 14:16:15.148459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.068 [2024-10-30 14:16:15.148469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.068 [2024-10-30 14:16:15.148476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9da8000b90 00:29:17.068 [2024-10-30 14:16:15.148490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.068 qpair failed and we were unable to recover it. 00:29:17.068 [2024-10-30 14:16:15.148641] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:17.068 A controller has encountered a failure and is being reset. 00:29:17.068 [2024-10-30 14:16:15.148819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7bbf30 (9): Bad file descriptor 00:29:17.068 Controller properly reset. 00:29:17.068 Initializing NVMe Controllers 00:29:17.068 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:17.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:17.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:17.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:17.068 Initialization complete. Launching workers. 
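Once the Keep Alive submission fails, the host abandons the broken association, the controller is reported as properly reset, and the test application re-attaches to the subsystem, associating the new TCP queue pairs with lcores 0-3 (the "Starting thread on core N" lines below). For reference only, the same attach-and-verify cycle can be approximated by hand from the initiator side with stock nvme-cli, using the address, port, and NQN shown in this log; this is an illustrative sketch, not a command sequence the harness itself runs:

nvme discover -t tcp -a 10.0.0.2 -s 4420            # confirm the target is listening again
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list-subsys                                    # the subsystem should now show a live TCP path
nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # tear the manual connection back down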
00:29:17.068 Starting thread on core 1 00:29:17.068 Starting thread on core 2 00:29:17.068 Starting thread on core 3 00:29:17.068 Starting thread on core 0 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:17.068 00:29:17.068 real 0m11.472s 00:29:17.068 user 0m21.896s 00:29:17.068 sys 0m3.825s 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.068 ************************************ 00:29:17.068 END TEST nvmf_target_disconnect_tc2 00:29:17.068 ************************************ 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:17.068 rmmod nvme_tcp 00:29:17.068 rmmod nvme_fabrics 00:29:17.068 rmmod nvme_keyring 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1211394 ']' 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1211394 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1211394 ']' 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1211394 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.068 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1211394 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1211394' 00:29:17.329 killing process with pid 1211394 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1211394 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1211394 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.329 14:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.874 14:16:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:19.874 00:29:19.874 real 0m21.832s 00:29:19.874 user 0m49.710s 00:29:19.874 sys 0m10.107s 00:29:19.874 14:16:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.874 14:16:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:19.874 ************************************ 00:29:19.874 END TEST nvmf_target_disconnect 00:29:19.874 ************************************ 00:29:19.874 14:16:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:19.874 00:29:19.874 real 6m32.832s 00:29:19.874 user 11m21.590s 00:29:19.874 sys 2m15.638s 00:29:19.874 14:16:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.874 14:16:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.874 ************************************ 00:29:19.874 END TEST nvmf_host 00:29:19.874 ************************************ 00:29:19.874 14:16:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:19.874 14:16:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:19.874 14:16:17 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:19.874 14:16:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:19.874 14:16:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.874 14:16:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.874 ************************************ 00:29:19.874 START TEST nvmf_target_core_interrupt_mode 00:29:19.874 ************************************ 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:19.874 * Looking for test storage... 00:29:19.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.874 --rc genhtml_branch_coverage=1 00:29:19.874 --rc genhtml_function_coverage=1 00:29:19.874 --rc genhtml_legend=1 00:29:19.874 --rc geninfo_all_blocks=1 00:29:19.874 --rc geninfo_unexecuted_blocks=1 00:29:19.874 00:29:19.874 ' 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:19.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.874 --rc genhtml_branch_coverage=1 00:29:19.874 --rc genhtml_function_coverage=1 00:29:19.874 --rc genhtml_legend=1 00:29:19.874 --rc geninfo_all_blocks=1 00:29:19.874 --rc geninfo_unexecuted_blocks=1 00:29:19.874 00:29:19.874 ' 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.874 --rc genhtml_branch_coverage=1 00:29:19.874 --rc genhtml_function_coverage=1 00:29:19.874 --rc genhtml_legend=1 00:29:19.874 --rc geninfo_all_blocks=1 00:29:19.874 --rc geninfo_unexecuted_blocks=1 00:29:19.874 00:29:19.874 ' 00:29:19.874 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.874 --rc genhtml_branch_coverage=1 00:29:19.874 --rc genhtml_function_coverage=1 00:29:19.874 --rc genhtml_legend=1 00:29:19.874 --rc geninfo_all_blocks=1 00:29:19.875 --rc geninfo_unexecuted_blocks=1 00:29:19.875 00:29:19.875 ' 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.875 14:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:19.875 ************************************ 00:29:19.875 START TEST nvmf_abort 00:29:19.875 ************************************ 00:29:19.875 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:19.875 * Looking for test storage... 00:29:19.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:19.875 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:19.875 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:29:19.875 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:20.137 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:20.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.138 --rc genhtml_branch_coverage=1 00:29:20.138 --rc genhtml_function_coverage=1 00:29:20.138 --rc genhtml_legend=1 00:29:20.138 --rc geninfo_all_blocks=1 00:29:20.138 --rc geninfo_unexecuted_blocks=1 00:29:20.138 00:29:20.138 ' 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:20.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.138 --rc genhtml_branch_coverage=1 00:29:20.138 --rc genhtml_function_coverage=1 00:29:20.138 --rc genhtml_legend=1 00:29:20.138 --rc geninfo_all_blocks=1 00:29:20.138 --rc geninfo_unexecuted_blocks=1 00:29:20.138 00:29:20.138 ' 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:20.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.138 --rc genhtml_branch_coverage=1 00:29:20.138 --rc genhtml_function_coverage=1 00:29:20.138 --rc genhtml_legend=1 00:29:20.138 --rc geninfo_all_blocks=1 00:29:20.138 --rc geninfo_unexecuted_blocks=1 00:29:20.138 00:29:20.138 ' 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:20.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.138 --rc genhtml_branch_coverage=1 00:29:20.138 --rc genhtml_function_coverage=1 00:29:20.138 --rc genhtml_legend=1 00:29:20.138 --rc geninfo_all_blocks=1 00:29:20.138 --rc geninfo_unexecuted_blocks=1 00:29:20.138 00:29:20.138 ' 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.138 14:16:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:20.138 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.139 14:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.393 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.393 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.394 14:16:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:28.394 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
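The entries here and just below show nvmf/common.sh discovering both e810 ports (0000:4b:00.0 and 0000:4b:00.1, exposed as cvl_0_0 and cvl_0_1) and then rebuilding the single-box topology used throughout this run: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace to serve as the target side (NVMF_FIRST_TARGET_IP=10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1). A minimal hand-run equivalent is sketched here; the 10.0.0.1/24 assignment appears verbatim below, while the matching 10.0.0.2/24 assignment inside the namespace is inferred from the target IP variable and is not visible in this excerpt:

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side interface into it
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the default namespace
ip link set cvl_0_1 up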
00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:28.394 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:28.394 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:28.394 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:28.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:29:28.394 00:29:28.394 --- 10.0.0.2 ping statistics --- 00:29:28.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.394 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:28.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:29:28.394 00:29:28.394 --- 10.0.0.1 ping statistics --- 00:29:28.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.394 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.394 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1217038 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1217038 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1217038 ']' 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.395 14:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.395 [2024-10-30 14:16:25.779701] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:28.395 [2024-10-30 14:16:25.780854] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:29:28.395 [2024-10-30 14:16:25.780908] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.395 [2024-10-30 14:16:25.880469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:28.395 [2024-10-30 14:16:25.932833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.395 [2024-10-30 14:16:25.932884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.395 [2024-10-30 14:16:25.932900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.395 [2024-10-30 14:16:25.932907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.395 [2024-10-30 14:16:25.932913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.395 [2024-10-30 14:16:25.934951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.395 [2024-10-30 14:16:25.935208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.395 [2024-10-30 14:16:25.935209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.395 [2024-10-30 14:16:26.011015] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:28.395 [2024-10-30 14:16:26.012075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:28.395 [2024-10-30 14:16:26.012743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
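The nvmf_tcp_init trace above (nvmf/common.sh@250-291) builds a two-port loopback topology on this host: the second E810 port, cvl_0_1, stays in the default namespace as the initiator side, while cvl_0_0 is moved into a private namespace that will host the target at 10.0.0.2. A minimal standalone sketch of the same steps, assuming the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addressing used in this run:

  # Put the target-side port in its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator interface and check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1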
00:29:28.395 [2024-10-30 14:16:26.012792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.395 [2024-10-30 14:16:26.648261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.395 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.656 Malloc0 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.656 Delay0 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
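The rpc_cmd calls above assemble the abort test's backing stack: a small RAM-backed bdev (Malloc0) with a delay bdev (Delay0) layered on top, exported as a namespace of nqn.2016-06.io.spdk:cnode0. The -r/-t/-w/-n values passed to bdev_delay_create look like average and p99 read/write latencies in microseconds, so each I/O is held for roughly a second, which is what gives the abort example outstanding commands to cancel. A hedged sketch of the same sequence issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock (rpc_cmd appears to wrap the same RPC interface); the listener setup follows in the next entries:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0        # RAM-backed base bdev
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0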
00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.656 [2024-10-30 14:16:26.748206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.656 14:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:28.656 [2024-10-30 14:16:26.932830] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:31.211 Initializing NVMe Controllers 00:29:31.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:31.211 controller IO queue size 128 less than required 00:29:31.211 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:31.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:31.211 Initialization complete. Launching workers. 
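With data and discovery listeners on 10.0.0.2:4420 in place, the test drives the target with the bundled abort example over NVMe/TCP. Reconstructing the invocation traced above, with flag meanings as I read them rather than taken from the log (-r transport ID, -c core mask, -t run time in seconds, -q queue depth, -l log level):

  ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
  # One core, one second, queue depth 128, warnings only.
  $ABORT -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

In the results that follow, 28771 aborts were submitted and 28714 of them succeeded; the 57 unsuccessful ones presumably targeted I/Os that had already completed, and 66 aborts could not be submitted at all.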
00:29:31.211 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28714 00:29:31.211 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28771, failed to submit 66 00:29:31.211 success 28714, unsuccessful 57, failed 0 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.211 rmmod nvme_tcp 00:29:31.211 rmmod nvme_fabrics 00:29:31.211 rmmod nvme_keyring 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1217038 ']' 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1217038 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1217038 ']' 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1217038 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217038 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217038' 00:29:31.211 killing process with pid 1217038 
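The entries that follow trace nvmftestfini tearing the abort setup back down: the subsystem is deleted, the kernel NVMe/TCP modules are unloaded, the target process is killed, the SPDK_NVMF-tagged iptables rule is stripped, and the namespace plumbing is removed. Roughly, with the namespace deletion inferred rather than visible (it happens inside _remove_spdk_ns, whose body is not shown in this trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  modprobe -v -r nvme-tcp           # also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  # Keep every iptables rule except the ones the test tagged with SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1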
00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1217038 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1217038 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.211 14:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.125 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.386 00:29:33.386 real 0m13.390s 00:29:33.386 user 0m10.795s 00:29:33.386 sys 0m7.262s 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:33.386 ************************************ 00:29:33.386 END TEST nvmf_abort 00:29:33.386 ************************************ 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:33.386 ************************************ 00:29:33.386 START TEST nvmf_ns_hotplug_stress 00:29:33.386 ************************************ 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:33.386 * Looking for test storage... 
00:29:33.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:29:33.386 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:33.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.648 --rc genhtml_branch_coverage=1 00:29:33.648 --rc genhtml_function_coverage=1 00:29:33.648 --rc genhtml_legend=1 00:29:33.648 --rc geninfo_all_blocks=1 00:29:33.648 --rc geninfo_unexecuted_blocks=1 00:29:33.648 00:29:33.648 ' 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:33.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.648 --rc genhtml_branch_coverage=1 00:29:33.648 --rc genhtml_function_coverage=1 00:29:33.648 --rc genhtml_legend=1 00:29:33.648 --rc geninfo_all_blocks=1 00:29:33.648 --rc geninfo_unexecuted_blocks=1 00:29:33.648 00:29:33.648 ' 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:33.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.648 --rc genhtml_branch_coverage=1 00:29:33.648 --rc genhtml_function_coverage=1 00:29:33.648 --rc genhtml_legend=1 00:29:33.648 --rc geninfo_all_blocks=1 00:29:33.648 --rc geninfo_unexecuted_blocks=1 00:29:33.648 00:29:33.648 ' 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:33.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.648 --rc genhtml_branch_coverage=1 00:29:33.648 --rc genhtml_function_coverage=1 
00:29:33.648 --rc genhtml_legend=1 00:29:33.648 --rc geninfo_all_blocks=1 00:29:33.648 --rc geninfo_unexecuted_blocks=1 00:29:33.648 00:29:33.648 ' 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.648 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
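Before the test body runs, autotest_common.sh checks whether the installed lcov predates 2.x by comparing version strings with cmp_versions (traced above as lt 1.15 2): each version is split on '.', '-' and ':' and the fields are compared numerically, left to right. A condensed sketch of that comparison, paraphrased from the trace rather than copied from scripts/common.sh (version_lt is a made-up name):

  version_lt() {                        # usage: version_lt A B  ->  success if A < B
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                            # equal, so not strictly less than
  }
  version_lt 1.15 2 && echo '1.15 is older than 2'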
00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:33.649 14:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.874 14:16:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.874 14:16:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:41.874 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:41.874 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.874 
14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:41.874 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:41.874 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.874 14:16:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.874 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.875 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.875 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.875 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.875 14:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:29:41.875 00:29:41.875 --- 10.0.0.2 ping statistics --- 00:29:41.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.875 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:29:41.875 00:29:41.875 --- 10.0.0.1 ping statistics --- 00:29:41.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.875 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1221834 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1221834 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1221834 ']' 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
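nvmfappstart then launches the target inside the namespace with interrupt mode enabled, as traced above: -m 0xE puts reactors on cores 1-3 (matching the 'Total cores available: 3' and 'Reactor started on core ...' notices below), -e 0xFFFF enables every tracepoint group, and -i 0 is the shared-memory instance id that shows up as --file-prefix=spdk0 in the DPDK EAL parameters. waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A rough equivalent, with a simple polling loop standing in for the real waitforlisten helper:

  NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Stand-in for waitforlisten: poll the default RPC socket until the app responds.
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited during startup' >&2; exit 1; }
    sleep 0.2
  done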
00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.875 14:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:41.875 [2024-10-30 14:16:39.281021] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:41.875 [2024-10-30 14:16:39.282184] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:29:41.875 [2024-10-30 14:16:39.282236] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.875 [2024-10-30 14:16:39.380910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:41.875 [2024-10-30 14:16:39.432676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.875 [2024-10-30 14:16:39.432730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.875 [2024-10-30 14:16:39.432739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.875 [2024-10-30 14:16:39.432754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.875 [2024-10-30 14:16:39.432761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.875 [2024-10-30 14:16:39.434817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.875 [2024-10-30 14:16:39.435021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.875 [2024-10-30 14:16:39.435021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.875 [2024-10-30 14:16:39.511180] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:41.875 [2024-10-30 14:16:39.512342] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:41.875 [2024-10-30 14:16:39.512847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:41.875 [2024-10-30 14:16:39.512993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
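The target startup that produced the EAL/reactor notices above amounts to launching nvmf_tgt inside the namespace in interrupt mode and then waiting for its RPC socket. The binary path and flags (-i 0 -e 0xFFFF --interrupt-mode -m 0xE) are the ones in the trace; the polling loop below is only an illustrative stand-in for waitforlisten.

# Sketch: start the target in the netns and wait for /var/tmp/spdk.sock to answer RPCs.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5   # keep polling until the app listens on the UNIX domain socket
done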
00:29:41.875 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.875 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:41.875 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.875 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.875 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:41.875 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.875 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:41.875 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:42.137 [2024-10-30 14:16:40.303942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.137 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:42.399 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.399 [2024-10-30 14:16:40.684724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.660 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.660 14:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:42.922 Malloc0 00:29:42.922 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:43.182 Delay0 00:29:43.182 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.182 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:43.444 NULL1 00:29:43.444 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
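At this point the target is fully provisioned: TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, a discovery listener, a Malloc0-backed Delay0 bdev and a NULL1 bdev, both attached as namespaces. The sketch below collects those rpc.py calls (arguments copied from the trace) and then shows the pattern the next stretch of the log repeats: spdk_nvme_perf runs for 30 seconds while namespace 1 is removed and re-added and NULL1 is grown by one unit per iteration. The while-loop is a reconstruction of the @44-@50 trace lines, not a verbatim copy of ns_hotplug_stress.sh.

# Sketch: provision the target, start the load generator, hot-plug while it runs.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

"$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2> /dev/null; do          # keep going while perf is alive
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"        # grow NULL1 each pass (1001, 1002, ...)
done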
00:29:43.706 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:43.706 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1222241 00:29:43.706 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:43.706 14:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.095 Read completed with error (sct=0, sc=11) 00:29:45.095 14:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.095 14:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:45.095 14:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:45.095 true 00:29:45.356 14:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:45.356 14:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.929 14:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.190 14:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:46.190 14:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:46.450 true 00:29:46.450 14:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:46.450 14:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.711 14:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.711 14:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:46.711 14:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:46.971 true 00:29:46.971 14:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:46.971 14:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.174 14:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.174 14:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:48.174 14:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:48.435 true 00:29:48.435 14:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:48.435 14:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.376 14:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.376 14:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:49.376 14:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:49.637 true 00:29:49.637 14:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:49.637 14:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.898 14:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.898 14:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:49.898 14:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:50.159 true 00:29:50.159 14:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:50.159 14:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.419 14:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.679 14:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:50.679 14:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:50.679 true 00:29:50.679 14:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:50.679 14:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.621 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.621 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:51.621 14:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:51.882 true 00:29:51.882 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:51.882 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:52.144 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.406 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:52.406 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:52.406 true 00:29:52.406 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:52.406 14:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.819 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.819 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:53.819 14:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:54.078 true 00:29:54.078 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:54.078 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.019 14:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.019 14:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:55.019 14:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:55.280 true 00:29:55.280 14:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:55.280 14:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.280 14:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.539 14:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:55.539 14:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:55.799 true 00:29:55.799 14:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:55.799 14:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.183 14:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.183 14:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:57.183 14:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:57.183 true 00:29:57.183 14:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:57.183 14:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:58.125 14:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:58.386 14:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:58.386 14:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:58.386 true 00:29:58.386 14:16:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:58.386 14:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.646 14:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.906 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:58.906 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:58.906 true 00:29:59.168 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:59.168 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.168 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.428 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:59.428 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:59.690 true 00:29:59.690 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:29:59.690 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.690 14:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.951 14:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:59.951 14:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:00.211 true 00:30:00.211 14:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:00.211 14:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.152 14:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.413 14:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:01.413 14:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:01.673 true 00:30:01.673 14:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:01.673 14:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.613 14:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.613 14:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:02.613 14:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:02.873 true 00:30:02.873 14:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:02.873 14:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.134 14:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.134 14:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:03.135 14:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:03.396 true 00:30:03.396 14:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:03.396 14:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.598 14:17:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.598 14:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:04.598 14:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:04.859 true 00:30:04.859 14:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:04.859 14:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.800 14:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.800 14:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:05.800 14:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:06.059 true 00:30:06.059 14:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:06.059 14:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.319 14:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.319 14:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:06.319 14:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:06.578 true 00:30:06.579 14:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:06.579 14:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.961 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:30:07.961 14:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.961 14:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:07.961 14:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:07.961 true 00:30:07.961 14:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:07.961 14:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.904 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.164 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:09.164 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:09.164 true 00:30:09.164 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:09.164 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.424 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.684 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:09.684 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:09.684 true 00:30:09.684 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:09.684 14:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:11.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.071 14:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.071 14:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:11.071 14:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:11.331 true 00:30:11.331 14:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:11.331 14:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.270 14:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.270 14:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:12.270 14:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:12.530 true 00:30:12.530 14:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:12.530 14:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.791 14:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.791 14:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:12.791 14:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:13.051 true 00:30:13.051 14:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1222241 00:30:13.051 14:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.436 Initializing NVMe Controllers 00:30:14.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.436 Controller IO queue size 128, less than required. 00:30:14.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:14.436 Controller IO queue size 128, less than required. 00:30:14.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:14.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:14.436 Initialization complete. Launching workers. 00:30:14.436 ======================================================== 00:30:14.436 Latency(us) 00:30:14.436 Device Information : IOPS MiB/s Average min max 00:30:14.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2308.47 1.13 37681.89 1638.56 1050142.47 00:30:14.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 19686.53 9.61 6501.87 1129.29 399195.19 00:30:14.436 ======================================================== 00:30:14.436 Total : 21995.00 10.74 9774.34 1129.29 1050142.47 00:30:14.436 00:30:14.436 14:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.436 14:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:14.436 14:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:14.436 true 00:30:14.698 14:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1222241 00:30:14.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1222241) - No such process 00:30:14.698 14:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1222241 00:30:14.698 14:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.698 14:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:14.958 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:14.958 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:14.958 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:14.958 14:17:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:14.958 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:15.220 null0 00:30:15.220 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.220 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.220 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:15.220 null1 00:30:15.220 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.220 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.220 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:15.481 null2 00:30:15.481 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.481 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.481 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:15.481 null3 00:30:15.742 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.742 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.742 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:15.742 null4 00:30:15.742 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.742 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.742 14:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:16.003 null5 00:30:16.003 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.003 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.003 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:16.003 null6 00:30:16.003 14:17:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.003 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.003 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:16.265 null7 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
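The null0 through null7 creations interleaved above reduce to a simple loop; the size/block-size arguments (100, 4096) are the ones seen in the bdev_null_create calls, one bdev per upcoming hotplug worker.

# Sketch: create the eight null bdevs used by the multi-threaded add/remove phase.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for i in $(seq 0 7); do
    "$SPDK/scripts/rpc.py" bdev_null_create "null$i" 100 4096
done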
00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:16.265 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
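The heavily interleaved xtrace in this region is eight background workers, each repeatedly attaching and detaching its own namespace/bdev pair, with the parent collecting their PIDs and waiting on them. The sketch below is reconstructed from the visible pattern (the "i < 10" counters, the -n <nsid> add_ns calls, pids+=($!) and the later wait); the function body is illustrative, not a verbatim copy of ns_hotplug_stress.sh.

# Sketch: eight concurrent add/remove workers, one namespace ID and null bdev each.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}
pids=()
for ((n = 0; n < 8; n++)); do
    add_remove "$((n + 1))" "null$n" &   # worker n handles namespace ID n+1
    pids+=($!)
done
wait "${pids[@]}"                        # matches the "wait <8 pids>" line in the trace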
00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1228619 1228620 1228622 1228624 1228626 1228628 1228630 1228631 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.266 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:16.528 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.528 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:16.528 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:16.528 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:16.528 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:16.528 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:16.528 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:16.528 14:17:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.789 14:17:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.789 14:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:16.789 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.789 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:16.789 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:16.789 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.050 14:17:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.050 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.051 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.311 14:17:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.311 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.574 14:17:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:17.574 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.836 14:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.836 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.098 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.360 14:17:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.360 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:18.360 14:17:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:18.622 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:18.883 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:18.883 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:18.884 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:18.884 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.884 14:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:18.884 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.145 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.146 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:19.408 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.409 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.671 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:19.934 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.934 14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.934 
14:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.934 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.197 rmmod nvme_tcp 00:30:20.197 rmmod nvme_fabrics 00:30:20.197 rmmod nvme_keyring 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1221834 ']' 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1221834 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1221834 ']' 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1221834 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:20.197 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1221834 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1221834' 00:30:20.459 killing process with pid 1221834 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1221834 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1221834 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.459 14:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.012 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.012 00:30:23.012 real 0m49.236s 00:30:23.012 user 2m57.684s 00:30:23.012 sys 0m21.230s 00:30:23.012 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.012 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:23.012 ************************************ 00:30:23.012 END TEST nvmf_ns_hotplug_stress 00:30:23.012 ************************************ 00:30:23.012 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:23.012 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:23.012 14:17:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.012 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:23.012 ************************************ 00:30:23.012 START TEST nvmf_delete_subsystem 00:30:23.012 ************************************ 00:30:23.012 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:23.012 * Looking for test storage... 00:30:23.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:23.013 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:23.013 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:30:23.013 14:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.013 --rc genhtml_branch_coverage=1 00:30:23.013 --rc genhtml_function_coverage=1 00:30:23.013 --rc genhtml_legend=1 00:30:23.013 --rc geninfo_all_blocks=1 00:30:23.013 --rc geninfo_unexecuted_blocks=1 00:30:23.013 00:30:23.013 ' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.013 --rc genhtml_branch_coverage=1 00:30:23.013 --rc genhtml_function_coverage=1 00:30:23.013 --rc genhtml_legend=1 00:30:23.013 --rc geninfo_all_blocks=1 00:30:23.013 --rc geninfo_unexecuted_blocks=1 00:30:23.013 00:30:23.013 ' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.013 --rc genhtml_branch_coverage=1 00:30:23.013 --rc genhtml_function_coverage=1 00:30:23.013 --rc genhtml_legend=1 00:30:23.013 --rc geninfo_all_blocks=1 00:30:23.013 --rc geninfo_unexecuted_blocks=1 00:30:23.013 00:30:23.013 ' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.013 --rc genhtml_branch_coverage=1 00:30:23.013 --rc genhtml_function_coverage=1 00:30:23.013 --rc 
genhtml_legend=1 00:30:23.013 --rc geninfo_all_blocks=1 00:30:23.013 --rc geninfo_unexecuted_blocks=1 00:30:23.013 00:30:23.013 ' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.013 14:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:23.013 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.014 14:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.160 14:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.160 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.161 14:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:31.161 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:31.161 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.161 14:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:31.161 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:31.161 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:30:31.161 00:30:31.161 --- 10.0.0.2 ping statistics --- 00:30:31.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.161 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:31.161 00:30:31.161 --- 10.0.0.1 ping statistics --- 00:30:31.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.161 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1233624 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1233624 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1233624 ']' 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
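For reference, the netns plumbing that produced the two pings above (nvmf_tcp_init in nvmf/common.sh) can be reproduced by hand. A minimal sketch, assuming the cvl_0_0/cvl_0_1 devices reported earlier in this log and run as root; the real helper also handles cleanup and multi-IP cases:

#!/usr/bin/env bash
# Minimal sketch of the TCP loopback topology used above: one e810 port is
# moved into a network namespace and acts as the target, the other stays in
# the default namespace and acts as the initiator.
set -euxo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target side, moved into the namespace
INI_IF=cvl_0_1          # initiator side, stays in the default namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic through, tagged so cleanup can strip the rule later
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1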
00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.161 14:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 [2024-10-30 14:17:28.543437] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:31.162 [2024-10-30 14:17:28.544576] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:30:31.162 [2024-10-30 14:17:28.544628] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.162 [2024-10-30 14:17:28.643283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:31.162 [2024-10-30 14:17:28.694722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.162 [2024-10-30 14:17:28.694779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.162 [2024-10-30 14:17:28.694788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.162 [2024-10-30 14:17:28.694795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.162 [2024-10-30 14:17:28.694801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.162 [2024-10-30 14:17:28.696572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.162 [2024-10-30 14:17:28.696576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.162 [2024-10-30 14:17:28.773270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:31.162 [2024-10-30 14:17:28.773802] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:31.162 [2024-10-30 14:17:28.774130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
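nvmfappstart, whose startup notices appear above, essentially backgrounds nvmf_tgt inside the target namespace and then waits for its RPC socket. A rough sketch under those assumptions; the poll loop here stands in for autotest_common.sh's waitforlisten, which additionally checks that the pid stays alive:

#!/usr/bin/env bash
# Start nvmf_tgt in interrupt mode on cores 0-1 inside the namespace, then
# wait until its JSON-RPC server answers on the default UNIX socket.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk
SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!   # pid of the backgrounded command (the netns exec wrapper)

# Poll the RPC socket until the target is ready to accept commands
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is listening on $SOCK"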
00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 [2024-10-30 14:17:29.405703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 [2024-10-30 14:17:29.438164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 NULL1 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.162 14:17:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.162 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.423 Delay0 00:30:31.423 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.423 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.423 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.423 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.423 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.423 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1233811 00:30:31.423 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:31.423 14:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:31.423 [2024-10-30 14:17:29.560387] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
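Written out as plain rpc.py invocations, the subsystem setup logged above (delete_subsystem.sh@15 through @24) amounts to the following; the arguments are copied from the log, the shell variables are only for readability:

#!/usr/bin/env bash
# Configure the TCP transport, a subsystem with a listener on 10.0.0.2:4420,
# and a deliberately slow namespace (null bdev wrapped in a delay bdev) so
# that I/O is still in flight when the subsystem gets deleted later.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

"$RPC" bdev_null_create NULL1 1000 512          # null bdev, 512-byte blocks, as logged
"$RPC" bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000  # large read/write latencies
"$RPC" nvmf_subsystem_add_ns "$NQN" Delay0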
00:30:33.340 14:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.340 14:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.340 14:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.603 starting I/O failed: -6 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Read completed with error (sct=0, sc=8) 00:30:33.603 Write completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 [2024-10-30 14:17:31.727881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x15a2410 is same with the state(6) to be set 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Write completed 
with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 starting I/O failed: -6 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 [2024-10-30 14:17:31.730479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f31a8000c00 is same with the state(6) to be set 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 
Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Write completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:33.604 Read completed with error (sct=0, sc=8) 00:30:34.548 [2024-10-30 14:17:32.701003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a3af0 is same with the state(6) to be set 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 [2024-10-30 14:17:32.731346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a25f0 is same with the state(6) to be set 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed 
with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 [2024-10-30 14:17:32.732619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f31a800d780 is same with the state(6) to be set 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 [2024-10-30 14:17:32.732722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f31a800cfe0 is same with the state(6) to be set 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Read completed with error (sct=0, sc=8) 00:30:34.548 Write completed with error (sct=0, sc=8) 00:30:34.548 [2024-10-30 14:17:32.732826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2b00 is same with the state(6) to be set 00:30:34.548 Initializing NVMe Controllers 00:30:34.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.548 Controller IO queue size 128, less than required. 
00:30:34.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:34.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:34.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:34.548 Initialization complete. Launching workers. 00:30:34.548 ======================================================== 00:30:34.548 Latency(us) 00:30:34.548 Device Information : IOPS MiB/s Average min max 00:30:34.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.41 0.09 880752.10 447.82 1010853.72 00:30:34.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.55 0.08 931881.45 342.64 1012186.90 00:30:34.548 ======================================================== 00:30:34.548 Total : 330.96 0.16 904627.82 342.64 1012186.90 00:30:34.548 00:30:34.548 [2024-10-30 14:17:32.733637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a3af0 (9): Bad file descriptor 00:30:34.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:34.548 14:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.548 14:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:34.548 14:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1233811 00:30:34.548 14:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1233811 00:30:35.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1233811) - No such process 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1233811 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1233811 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1233811 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:35.122 [2024-10-30 14:17:33.265971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1234485 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1234485 00:30:35.122 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:35.122 [2024-10-30 14:17:33.367934] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
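The trace above shows delete_subsystem.sh re-creating the subsystem over RPC (nvmf_create_subsystem, nvmf_subsystem_add_listener, nvmf_subsystem_add_ns) and then launching spdk_nvme_perf in the background while it polls for the process with kill -0. A minimal sketch of that polling pattern, assuming the perf arguments and the 20-iteration cap shown in the trace; the binary path is the one printed above and the error handling is simplified:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin   # path from the trace
"$SPDK_BIN/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!        # the trace shows perf_pid=1234485
delay=0
# kill -0 only checks that the process still exists; loop until perf exits
# or roughly ten seconds (20 * 0.5 s) have passed.
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break
    sleep 0.5
done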
00:30:35.698 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:35.698 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1234485 00:30:35.698 14:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:36.271 14:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:36.271 14:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1234485 00:30:36.271 14:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:36.533 14:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:36.533 14:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1234485 00:30:36.533 14:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:37.105 14:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:37.105 14:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1234485 00:30:37.105 14:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:37.678 14:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:37.678 14:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1234485 00:30:37.678 14:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:38.250 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:38.250 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1234485 00:30:38.250 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:38.250 Initializing NVMe Controllers 00:30:38.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:38.250 Controller IO queue size 128, less than required. 00:30:38.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:38.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:38.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:38.250 Initialization complete. Launching workers. 
00:30:38.250 ======================================================== 00:30:38.250 Latency(us) 00:30:38.250 Device Information : IOPS MiB/s Average min max 00:30:38.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002188.95 1000220.16 1005244.76 00:30:38.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004106.83 1000318.10 1010949.66 00:30:38.250 ======================================================== 00:30:38.250 Total : 256.00 0.12 1003147.89 1000220.16 1010949.66 00:30:38.250 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1234485 00:30:38.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1234485) - No such process 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1234485 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:38.823 rmmod nvme_tcp 00:30:38.823 rmmod nvme_fabrics 00:30:38.823 rmmod nvme_keyring 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1233624 ']' 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1233624 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1233624 ']' 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1233624 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1233624 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1233624' 00:30:38.823 killing process with pid 1233624 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1233624 00:30:38.823 14:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1233624 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.823 14:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.373 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:41.373 00:30:41.373 real 0m18.301s 00:30:41.373 user 0m26.521s 00:30:41.373 sys 0m7.531s 00:30:41.373 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.373 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.373 ************************************ 00:30:41.373 END TEST nvmf_delete_subsystem 00:30:41.373 ************************************ 00:30:41.373 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:41.374 ************************************ 00:30:41.374 START TEST nvmf_host_management 00:30:41.374 ************************************ 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:41.374 * Looking for test storage... 00:30:41.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:41.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.374 --rc genhtml_branch_coverage=1 00:30:41.374 --rc genhtml_function_coverage=1 00:30:41.374 --rc genhtml_legend=1 00:30:41.374 --rc geninfo_all_blocks=1 00:30:41.374 --rc geninfo_unexecuted_blocks=1 00:30:41.374 00:30:41.374 ' 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:41.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.374 --rc genhtml_branch_coverage=1 00:30:41.374 --rc genhtml_function_coverage=1 00:30:41.374 --rc genhtml_legend=1 00:30:41.374 --rc geninfo_all_blocks=1 00:30:41.374 --rc geninfo_unexecuted_blocks=1 00:30:41.374 00:30:41.374 ' 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:41.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.374 --rc genhtml_branch_coverage=1 00:30:41.374 --rc genhtml_function_coverage=1 00:30:41.374 --rc genhtml_legend=1 00:30:41.374 --rc geninfo_all_blocks=1 00:30:41.374 --rc geninfo_unexecuted_blocks=1 00:30:41.374 00:30:41.374 ' 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:41.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.374 --rc genhtml_branch_coverage=1 00:30:41.374 --rc genhtml_function_coverage=1 00:30:41.374 --rc genhtml_legend=1 
00:30:41.374 --rc geninfo_all_blocks=1 00:30:41.374 --rc geninfo_unexecuted_blocks=1 00:30:41.374 00:30:41.374 ' 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.374 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.375 14:17:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:41.375 14:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.525 14:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:49.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:49.525 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:49.525 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:49.525 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.525 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:49.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:30:49.526 00:30:49.526 --- 10.0.0.2 ping statistics --- 00:30:49.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.526 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:30:49.526 00:30:49.526 --- 10.0.0.1 ping statistics --- 00:30:49.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.526 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1239458 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1239458 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1239458 ']' 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:49.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.526 14:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.526 [2024-10-30 14:17:47.005203] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:49.526 [2024-10-30 14:17:47.006359] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:30:49.526 [2024-10-30 14:17:47.006415] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.526 [2024-10-30 14:17:47.107430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:49.526 [2024-10-30 14:17:47.160357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.526 [2024-10-30 14:17:47.160409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.526 [2024-10-30 14:17:47.160418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.526 [2024-10-30 14:17:47.160426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.526 [2024-10-30 14:17:47.160432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.526 [2024-10-30 14:17:47.162819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.526 [2024-10-30 14:17:47.162982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.526 [2024-10-30 14:17:47.163118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.526 [2024-10-30 14:17:47.163119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:49.526 [2024-10-30 14:17:47.239715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:49.526 [2024-10-30 14:17:47.240795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:49.526 [2024-10-30 14:17:47.241007] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:49.526 [2024-10-30 14:17:47.241470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:49.526 [2024-10-30 14:17:47.241524] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
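
The interface, namespace, and firewall setup traced above reduces to roughly the following standalone sequence. The interface names cvl_0_0/cvl_0_1, the 10.0.0.1/10.0.0.2 addresses, port 4420, and the nvmf_tgt arguments are taken from the trace; the sketch assumes root privileges and a looped pair of E810 ports, and omits the suite's helper wrappers (ipts, nvmfappstart).

  # Put one port of the NIC pair into a private namespace and address both sides.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in, then sanity-check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # Start the target inside the namespace, in interrupt mode on cores 1-4 (mask 0x1E).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
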
00:30:49.526 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.526 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:49.526 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.526 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.526 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.787 [2024-10-30 14:17:47.871973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.787 Malloc0 00:30:49.787 [2024-10-30 14:17:47.976149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.787 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.788 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:49.788 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.788 14:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1239536 00:30:49.788 14:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1239536 /var/tmp/bdevperf.sock 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1239536 ']' 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:49.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:49.788 { 00:30:49.788 "params": { 00:30:49.788 "name": "Nvme$subsystem", 00:30:49.788 "trtype": "$TEST_TRANSPORT", 00:30:49.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.788 "adrfam": "ipv4", 00:30:49.788 "trsvcid": "$NVMF_PORT", 00:30:49.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.788 "hdgst": ${hdgst:-false}, 00:30:49.788 "ddgst": ${ddgst:-false} 00:30:49.788 }, 00:30:49.788 "method": "bdev_nvme_attach_controller" 00:30:49.788 } 00:30:49.788 EOF 00:30:49.788 )") 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
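
The gen_nvmf_target_json idiom being traced (an array of heredoc fragments joined with IFS=',' and pretty-printed with jq) can be sketched roughly as below. Only the per-controller fragment, the IFS join, and the jq pass appear in the trace; the outer "subsystems"/"bdev" wrapper that bdevperf expects is a plausible reconstruction, not a verbatim copy of the suite's helper, and the hdgst/ddgst defaults are simplified to false as in the expanded output.

  gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
      # One bdev_nvme_attach_controller entry per requested subsystem number.
      config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
      )")
    done
    # Join the fragments with commas and wrap them in the bdev-subsystem layout (assumed wrapper).
    local IFS=,
    jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
  }
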
00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:49.788 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:49.788 "params": { 00:30:49.788 "name": "Nvme0", 00:30:49.788 "trtype": "tcp", 00:30:49.788 "traddr": "10.0.0.2", 00:30:49.788 "adrfam": "ipv4", 00:30:49.788 "trsvcid": "4420", 00:30:49.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.788 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:49.788 "hdgst": false, 00:30:49.788 "ddgst": false 00:30:49.788 }, 00:30:49.788 "method": "bdev_nvme_attach_controller" 00:30:49.788 }' 00:30:49.788 [2024-10-30 14:17:48.086777] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:30:50.049 [2024-10-30 14:17:48.086855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239536 ] 00:30:50.049 [2024-10-30 14:17:48.180887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.049 [2024-10-30 14:17:48.234958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.311 Running I/O for 10 seconds... 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:50.885 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:50.886 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:50.886 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:50.886 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:50.886 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:50.886 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.886 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.886 14:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.886 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=650 00:30:50.886 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 650 -ge 100 ']' 00:30:50.886 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:50.886 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:50.886 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:50.886 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:50.886 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.886 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.886 [2024-10-30 14:17:49.007777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 
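
The waitforio loop being traced polls bdevperf's RPC socket for read-I/O progress before the test proceeds; a standalone rendition might look like the following. The socket path, bdev name, 100-op threshold, and 10 retries come from the trace; rpc_cmd in the trace is the suite's wrapper, replaced here with scripts/rpc.py, and the one-second sleep between polls is an addition for this sketch.

  waitforio() {
    local rpc_sock=$1 bdev=$2 i count
    for ((i = 10; i > 0; i--)); do
      # Ask bdevperf for per-bdev I/O statistics and pull out the read-op count.
      count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
      [[ $count -ge 100 ]] && return 0   # enough traffic observed; I/O is flowing
      sleep 1
    done
    return 1                             # bdevperf never made progress
  }

  waitforio /var/tmp/bdevperf.sock Nvme0n1
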
00:30:50.886 [2024-10-30 14:17:49.007924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.007994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008253] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17664a0 is same with the state(6) to be set 00:30:50.886 [2024-10-30 14:17:49.008531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.886 [2024-10-30 14:17:49.008592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.886 [2024-10-30 14:17:49.008615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008780] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.008987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.008999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.887 [2024-10-30 14:17:49.009224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.887 [2024-10-30 14:17:49.009234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:50.888 [2024-10-30 14:17:49.009507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 
14:17:49.009685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-30 14:17:49.009759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c87470 is same with the state(6) to be set 00:30:50.888 [2024-10-30 14:17:49.009897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.888 [2024-10-30 14:17:49.009912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.888 [2024-10-30 14:17:49.009930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.888 [2024-10-30 14:17:49.009939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.888 [2024-10-30 14:17:49.009947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.889 [2024-10-30 14:17:49.009956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.889 [2024-10-30 14:17:49.009964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.889 [2024-10-30 14:17:49.009971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6e200 is same with the state(6) to be set 00:30:50.889 [2024-10-30 14:17:49.011204] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:50.889 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:30:50.889 task offset: 98304 on job bdev=Nvme0n1 fails 00:30:50.889 00:30:50.889 Latency(us) 00:30:50.889 [2024-10-30T13:17:49.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.889 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:50.889 Job: Nvme0n1 ended in about 0.53 seconds with error 00:30:50.889 Verification LBA range: start 0x0 length 0x400 00:30:50.889 Nvme0n1 : 0.53 1345.53 84.10 121.12 0.00 42524.00 3003.73 39103.15 00:30:50.889 [2024-10-30T13:17:49.188Z] =================================================================================================================== 00:30:50.889 [2024-10-30T13:17:49.188Z] Total : 1345.53 84.10 121.12 0.00 42524.00 3003.73 39103.15 00:30:50.889 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:50.889 [2024-10-30 14:17:49.013444] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:50.889 [2024-10-30 14:17:49.013485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6e200 (9): Bad file descriptor 00:30:50.889 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.889 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.889 [2024-10-30 14:17:49.015111] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:50.889 [2024-10-30 14:17:49.015198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:50.889 [2024-10-30 14:17:49.015228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.889 [2024-10-30 14:17:49.015243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:50.889 [2024-10-30 14:17:49.015251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:50.889 [2024-10-30 14:17:49.015260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.889 [2024-10-30 14:17:49.015268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6e200 00:30:50.889 [2024-10-30 14:17:49.015292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6e200 (9): Bad file descriptor 00:30:50.889 [2024-10-30 14:17:49.015305] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:50.889 [2024-10-30 14:17:49.015313] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:50.889 [2024-10-30 14:17:49.015324] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:50.889 [2024-10-30 14:17:49.015341] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
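
The failure above is provoked deliberately: the host's NQN is removed from the subsystem's allowed list while bdevperf has I/O in flight, so outstanding commands are aborted and the reconnect is refused. Expressed as plain rpc.py calls (NQNs as in the trace; rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py):

  # Revoke access: in-flight I/O is aborted (SQ DELETION) and the host's reconnect
  # attempt fails with "Subsystem ... does not allow host", as logged above.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

  # Restore access so the follow-up bdevperf run (below) can connect cleanly.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
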
00:30:50.889 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.889 14:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1239536 00:30:51.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1239536) - No such process 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:51.836 { 00:30:51.836 "params": { 00:30:51.836 "name": "Nvme$subsystem", 00:30:51.836 "trtype": "$TEST_TRANSPORT", 00:30:51.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.836 "adrfam": "ipv4", 00:30:51.836 "trsvcid": "$NVMF_PORT", 00:30:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.836 "hdgst": ${hdgst:-false}, 00:30:51.836 "ddgst": ${ddgst:-false} 00:30:51.836 }, 00:30:51.836 "method": "bdev_nvme_attach_controller" 00:30:51.836 } 00:30:51.836 EOF 00:30:51.836 )") 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:51.836 14:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:51.836 "params": { 00:30:51.836 "name": "Nvme0", 00:30:51.836 "trtype": "tcp", 00:30:51.836 "traddr": "10.0.0.2", 00:30:51.836 "adrfam": "ipv4", 00:30:51.836 "trsvcid": "4420", 00:30:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.836 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:51.836 "hdgst": false, 00:30:51.836 "ddgst": false 00:30:51.836 }, 00:30:51.836 "method": "bdev_nvme_attach_controller" 00:30:51.836 }' 00:30:51.836 [2024-10-30 14:17:50.090004] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
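
The second bdevperf run traced here feeds the generated JSON to the tool through a file descriptor; the same invocation can be written with process substitution. Paths and flags follow the trace; gen_nvmf_target_json is the helper sketched earlier.

  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 1
  # -q 64     queue depth
  # -o 65536  64 KiB I/O size
  # -w verify mixed read/write verification workload
  # -t 1      run time in seconds (the first run used -t 10 and -r /var/tmp/bdevperf.sock
  #           so the test could drive it over RPC while it ran)
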
00:30:51.836 [2024-10-30 14:17:50.090092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239943 ] 00:30:52.098 [2024-10-30 14:17:50.200991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.098 [2024-10-30 14:17:50.253809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.358 Running I/O for 1 seconds... 00:30:53.302 1676.00 IOPS, 104.75 MiB/s 00:30:53.302 Latency(us) 00:30:53.302 [2024-10-30T13:17:51.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.303 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.303 Verification LBA range: start 0x0 length 0x400 00:30:53.303 Nvme0n1 : 1.01 1715.39 107.21 0.00 0.00 36615.28 1228.80 38666.24 00:30:53.303 [2024-10-30T13:17:51.602Z] =================================================================================================================== 00:30:53.303 [2024-10-30T13:17:51.602Z] Total : 1715.39 107.21 0.00 0.00 36615.28 1228.80 38666.24 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:53.303 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:53.303 rmmod nvme_tcp 00:30:53.303 rmmod nvme_fabrics 00:30:53.566 rmmod nvme_keyring 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1239458 ']' 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1239458 00:30:53.566 14:17:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1239458 ']' 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1239458 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1239458 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1239458' 00:30:53.566 killing process with pid 1239458 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1239458 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1239458 00:30:53.566 [2024-10-30 14:17:51.811832] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.566 14:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.118 14:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.118 14:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:56.118 00:30:56.118 real 0m14.706s 00:30:56.118 user 
0m19.460s 00:30:56.118 sys 0m7.456s 00:30:56.118 14:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.118 14:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:56.118 ************************************ 00:30:56.118 END TEST nvmf_host_management 00:30:56.118 ************************************ 00:30:56.118 14:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:56.118 14:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:56.118 14:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.118 14:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:56.118 ************************************ 00:30:56.118 START TEST nvmf_lvol 00:30:56.118 ************************************ 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:56.118 * Looking for test storage... 00:30:56.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
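Editor's note: the scripts/common.sh xtrace around this point (cmp_versions, decimal, the ver1/ver2 arrays) is deciding whether the installed lcov predates 2.x so the matching coverage flags get exported. A simplified reimplementation of that dotted-version comparison, written here purely for illustration and not copied from SPDK, behaves like this:

version_lt() {                       # return 0 (true) if version $1 sorts before $2
  local -a a b
  IFS=. read -r -a a <<< "$1"
  IFS=. read -r -a b <<< "$2"
  local i x y max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < max; i++ )); do
    x=${a[i]:-0}; y=${b[i]:-0}
    if (( x < y )); then return 0; fi
    if (( x > y )); then return 1; fi
  done
  return 1                           # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x, keep the legacy --rc flags"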
00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:56.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.118 --rc genhtml_branch_coverage=1 00:30:56.118 --rc genhtml_function_coverage=1 00:30:56.118 --rc genhtml_legend=1 00:30:56.118 --rc geninfo_all_blocks=1 00:30:56.118 --rc geninfo_unexecuted_blocks=1 00:30:56.118 00:30:56.118 ' 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:56.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.118 --rc genhtml_branch_coverage=1 00:30:56.118 --rc genhtml_function_coverage=1 00:30:56.118 --rc genhtml_legend=1 00:30:56.118 --rc geninfo_all_blocks=1 00:30:56.118 --rc geninfo_unexecuted_blocks=1 00:30:56.118 00:30:56.118 ' 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:56.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.118 --rc genhtml_branch_coverage=1 00:30:56.118 --rc genhtml_function_coverage=1 00:30:56.118 --rc genhtml_legend=1 00:30:56.118 --rc geninfo_all_blocks=1 00:30:56.118 --rc geninfo_unexecuted_blocks=1 00:30:56.118 00:30:56.118 ' 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:56.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.118 --rc genhtml_branch_coverage=1 00:30:56.118 --rc genhtml_function_coverage=1 
00:30:56.118 --rc genhtml_legend=1 00:30:56.118 --rc geninfo_all_blocks=1 00:30:56.118 --rc geninfo_unexecuted_blocks=1 00:30:56.118 00:30:56.118 ' 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.118 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.119 14:17:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.119 14:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.287 14:18:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:04.287 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:04.287 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:04.287 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:04.287 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.287 
14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.287 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:31:04.288 00:31:04.288 --- 10.0.0.2 ping statistics --- 00:31:04.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.288 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:31:04.288 00:31:04.288 --- 10.0.0.1 ping statistics --- 00:31:04.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.288 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1244635 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1244635 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1244635 ']' 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.288 14:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.288 [2024-10-30 14:18:01.771104] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
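Editor's note: the nvmf_tcp_init trace above boils down to a two-port loopback topology, with one e810 port (cvl_0_0, 10.0.0.2) moved into a private namespace for the target and its peer port (cvl_0_1, 10.0.0.1) left in the root namespace for the initiator, then verified with a ping in each direction. A condensed sketch of that setup, using the interface names and addresses from this run, is:

# Sketch of the namespace topology built by nvmf_tcp_init (names and addresses mirror this run).
NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                        # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                    # initiator address stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                       # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target namespace -> root namespace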
00:31:04.288 [2024-10-30 14:18:01.772232] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:31:04.288 [2024-10-30 14:18:01.772295] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.288 [2024-10-30 14:18:01.872379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:04.288 [2024-10-30 14:18:01.924320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.288 [2024-10-30 14:18:01.924376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.288 [2024-10-30 14:18:01.924385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.288 [2024-10-30 14:18:01.924393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.288 [2024-10-30 14:18:01.924401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.288 [2024-10-30 14:18:01.926189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.288 [2024-10-30 14:18:01.926353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.288 [2024-10-30 14:18:01.926354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.288 [2024-10-30 14:18:02.002438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:04.288 [2024-10-30 14:18:02.003507] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:04.288 [2024-10-30 14:18:02.004142] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:04.288 [2024-10-30 14:18:02.004264] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
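Editor's note: once the interrupt-mode target is up, nvmf_lvol.sh provisions the volume under test through the rpc.py calls traced below. Gathered in one place, with the UUIDs this run happened to receive replaced by shell captures (the real script also prefixes rpc.py with ip netns exec cvl_0_0_ns_spdk), the sequence is roughly:

rpc="scripts/rpc.py"                                     # run inside the target netns in the real test
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                           # Malloc0: 64 MiB, 512 B blocks
$rpc bdev_malloc_create 64 512                           # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)           # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)          # 20 MiB lvol, prints its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)      # taken while spdk_nvme_perf is writing
$rpc bdev_lvol_resize "$lvol" 30                         # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                          # detach the clone from its snapshot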
00:31:04.288 14:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.288 14:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:04.288 14:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:04.288 14:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.288 14:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.551 14:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.551 14:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:04.551 [2024-10-30 14:18:02.787249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.551 14:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:04.812 14:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:04.812 14:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:05.073 14:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:05.073 14:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:05.333 14:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:05.602 14:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=538b9b34-e273-4df9-8f25-037222aead66 00:31:05.602 14:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 538b9b34-e273-4df9-8f25-037222aead66 lvol 20 00:31:05.602 14:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6477eb66-7e8e-4049-b452-90fab9806010 00:31:05.602 14:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:05.887 14:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6477eb66-7e8e-4049-b452-90fab9806010 00:31:06.221 14:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.221 [2024-10-30 14:18:04.363180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:06.221 14:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:06.514 14:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1245047 00:31:06.514 14:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:06.514 14:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:07.470 14:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6477eb66-7e8e-4049-b452-90fab9806010 MY_SNAPSHOT 00:31:07.731 14:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9e06857c-85e9-4afa-9acf-78b5ad91556c 00:31:07.731 14:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6477eb66-7e8e-4049-b452-90fab9806010 30 00:31:07.992 14:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9e06857c-85e9-4afa-9acf-78b5ad91556c MY_CLONE 00:31:07.992 14:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d64b8e06-32a8-4c2a-9eec-b81d67023883 00:31:07.992 14:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d64b8e06-32a8-4c2a-9eec-b81d67023883 00:31:08.564 14:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1245047 00:31:16.695 Initializing NVMe Controllers 00:31:16.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:16.695 Controller IO queue size 128, less than required. 00:31:16.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:16.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:16.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:16.695 Initialization complete. Launching workers. 
00:31:16.695 ======================================================== 00:31:16.695 Latency(us) 00:31:16.695 Device Information : IOPS MiB/s Average min max 00:31:16.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15324.00 59.86 8354.15 1382.07 83805.24 00:31:16.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15050.70 58.79 8507.92 2898.36 77558.71 00:31:16.695 ======================================================== 00:31:16.695 Total : 30374.70 118.65 8430.34 1382.07 83805.24 00:31:16.695 00:31:16.696 14:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:16.956 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6477eb66-7e8e-4049-b452-90fab9806010 00:31:16.956 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 538b9b34-e273-4df9-8f25-037222aead66 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.216 rmmod nvme_tcp 00:31:17.216 rmmod nvme_fabrics 00:31:17.216 rmmod nvme_keyring 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1244635 ']' 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1244635 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1244635 ']' 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1244635 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:17.216 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1244635 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1244635' 00:31:17.476 killing process with pid 1244635 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1244635 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1244635 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.476 14:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.021 00:31:20.021 real 0m23.737s 00:31:20.021 user 0m55.427s 00:31:20.021 sys 0m10.882s 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:20.021 ************************************ 00:31:20.021 END TEST nvmf_lvol 00:31:20.021 ************************************ 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:20.021 ************************************ 00:31:20.021 START TEST nvmf_lvs_grow 00:31:20.021 
************************************ 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:20.021 * Looking for test storage... 00:31:20.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:31:20.021 14:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:20.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.021 --rc genhtml_branch_coverage=1 00:31:20.021 --rc genhtml_function_coverage=1 00:31:20.021 --rc genhtml_legend=1 00:31:20.021 --rc geninfo_all_blocks=1 00:31:20.021 --rc geninfo_unexecuted_blocks=1 00:31:20.021 00:31:20.021 ' 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:20.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.021 --rc genhtml_branch_coverage=1 00:31:20.021 --rc genhtml_function_coverage=1 00:31:20.021 --rc genhtml_legend=1 00:31:20.021 --rc geninfo_all_blocks=1 00:31:20.021 --rc geninfo_unexecuted_blocks=1 00:31:20.021 00:31:20.021 ' 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:20.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.021 --rc genhtml_branch_coverage=1 00:31:20.021 --rc genhtml_function_coverage=1 00:31:20.021 --rc genhtml_legend=1 00:31:20.021 --rc geninfo_all_blocks=1 00:31:20.021 --rc geninfo_unexecuted_blocks=1 00:31:20.021 00:31:20.021 ' 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:20.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.021 --rc genhtml_branch_coverage=1 00:31:20.021 --rc genhtml_function_coverage=1 00:31:20.021 --rc genhtml_legend=1 00:31:20.021 --rc geninfo_all_blocks=1 00:31:20.021 --rc geninfo_unexecuted_blocks=1 00:31:20.021 00:31:20.021 ' 00:31:20.021 14:18:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.021 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
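The PATH values traced above keep growing because /etc/opt/spdk-pkgdep/paths/export.sh simply prepends the Go, protoc and golangci-lint directories every time a test script sources common.sh; nothing deduplicates the entries. A minimal sketch of that prepend pattern, using the tool versions visible in this run (the guard clause is only an illustration of how the duplicates could be avoided, not something the traced script does):

prepend_path() {
        # Prepend a directory unless it is already on PATH (hypothetical guard;
        # the real export.sh prepends unconditionally, hence the repetition above).
        local dir=$1
        case ":$PATH:" in
                *":$dir:"*) ;;
                *) PATH="$dir:$PATH" ;;
        esac
}
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/go/1.21.1/bin
export PATH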
00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:20.022 14:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.167 14:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
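The block above classifies the host's NICs before the transport is configured: common.sh fills per-family arrays (e810, x722, mlx) with the PCI addresses of known Intel and Mellanox device IDs and, since this run uses SPDK_TEST_NVMF_NICS=e810 with NET_TYPE=phy, keeps only the E810 entries as pci_devs. A rough stand-in for that lookup, replacing the internal pci_bus_cache map with lspci (device IDs taken from the trace):

#!/usr/bin/env bash
# Collect PCI addresses of E810 NICs (0x1592 / 0x159b) and keep them as pci_devs,
# mirroring the e810/x722 selection traced above. lspci replaces pci_bus_cache here.
declare -a e810=() x722=() pci_devs=()
for id in 1592 159b; do
        mapfile -t found < <(lspci -Dd "8086:$id" | awk '{print $1}')
        e810+=("${found[@]}")
done
mapfile -t x722 < <(lspci -Dd "8086:37d2" | awk '{print $1}')
pci_devs=("${e810[@]}")                       # SPDK_TEST_NVMF_NICS=e810 in this run
(( ${#pci_devs[@]} )) && printf 'Found %s (e810)\n' "${pci_devs[@]}"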
00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:28.167 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:28.167 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:28.167 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:28.167 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.167 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:28.168 14:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:31:28.168 00:31:28.168 --- 10.0.0.2 ping statistics --- 00:31:28.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.168 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:31:28.168 00:31:28.168 --- 10.0.0.1 ping statistics --- 00:31:28.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.168 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1251828 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1251828 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1251828 ']' 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:28.168 14:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.168 [2024-10-30 14:18:25.436407] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
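Everything from nvmf_tcp_init through nvmfappstart above boils down to building a TCP loopback out of the two E810 ports: the first port is moved into a private network namespace and becomes the target side (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is started inside the namespace in interrupt mode. Condensed from the traced commands (interface names, addresses and flags are the ones used in this run; run as root from the SPDK tree):

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"             # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                  # target ns -> root ns
# Start the NVMe-oF target inside the namespace, single core, interrupt mode:
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &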
00:31:28.168 [2024-10-30 14:18:25.437548] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:31:28.168 [2024-10-30 14:18:25.437601] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.168 [2024-10-30 14:18:25.536076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.168 [2024-10-30 14:18:25.586005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.168 [2024-10-30 14:18:25.586058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.168 [2024-10-30 14:18:25.586067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.168 [2024-10-30 14:18:25.586074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.168 [2024-10-30 14:18:25.586080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.168 [2024-10-30 14:18:25.586826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.168 [2024-10-30 14:18:25.662019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:28.168 [2024-10-30 14:18:25.662307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:28.168 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.168 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:28.168 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.168 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.168 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.168 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.168 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:28.168 [2024-10-30 14:18:26.455696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.429 ************************************ 00:31:28.429 START TEST lvs_grow_clean 00:31:28.429 ************************************ 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:28.429 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:28.690 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:28.691 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:28.691 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:28.691 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:28.691 14:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:28.951 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:28.951 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:28.951 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 lvol 150 00:31:29.212 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=25ca4517-2394-42ab-bc23-d9db55aa90c9 00:31:29.212 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:29.212 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:29.212 [2024-10-30 14:18:27.491369] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:29.212 [2024-10-30 14:18:27.491532] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:29.212 true 00:31:29.473 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:29.473 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:29.473 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:29.473 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:29.734 14:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 25ca4517-2394-42ab-bc23-d9db55aa90c9 00:31:29.998 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:29.998 [2024-10-30 14:18:28.244025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.998 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:30.259 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1252406 00:31:30.259 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:30.259 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:30.259 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1252406 /var/tmp/bdevperf.sock 00:31:30.259 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1252406 ']' 00:31:30.259 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:30.259 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.259 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:30.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:30.260 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.260 14:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:30.260 [2024-10-30 14:18:28.500795] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:31:30.260 [2024-10-30 14:18:28.500869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252406 ] 00:31:30.521 [2024-10-30 14:18:28.594394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.521 [2024-10-30 14:18:28.648196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.093 14:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.094 14:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:31.094 14:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:31.354 Nvme0n1 00:31:31.354 14:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:31.614 [ 00:31:31.614 { 00:31:31.614 "name": "Nvme0n1", 00:31:31.614 "aliases": [ 00:31:31.614 "25ca4517-2394-42ab-bc23-d9db55aa90c9" 00:31:31.614 ], 00:31:31.614 "product_name": "NVMe disk", 00:31:31.614 "block_size": 4096, 00:31:31.614 "num_blocks": 38912, 00:31:31.614 "uuid": "25ca4517-2394-42ab-bc23-d9db55aa90c9", 00:31:31.614 "numa_id": 0, 00:31:31.614 "assigned_rate_limits": { 00:31:31.614 "rw_ios_per_sec": 0, 00:31:31.614 "rw_mbytes_per_sec": 0, 00:31:31.614 "r_mbytes_per_sec": 0, 00:31:31.614 "w_mbytes_per_sec": 0 00:31:31.614 }, 00:31:31.614 "claimed": false, 00:31:31.614 "zoned": false, 00:31:31.614 "supported_io_types": { 00:31:31.614 "read": true, 00:31:31.614 "write": true, 00:31:31.614 "unmap": true, 00:31:31.614 "flush": true, 00:31:31.614 "reset": true, 00:31:31.614 "nvme_admin": true, 00:31:31.614 "nvme_io": true, 00:31:31.614 "nvme_io_md": false, 00:31:31.614 "write_zeroes": true, 00:31:31.614 "zcopy": false, 00:31:31.614 "get_zone_info": false, 00:31:31.614 "zone_management": false, 00:31:31.614 "zone_append": false, 00:31:31.614 "compare": true, 00:31:31.614 "compare_and_write": true, 00:31:31.614 "abort": true, 00:31:31.614 "seek_hole": false, 00:31:31.614 "seek_data": false, 00:31:31.614 "copy": true, 
00:31:31.614 "nvme_iov_md": false 00:31:31.614 }, 00:31:31.614 "memory_domains": [ 00:31:31.614 { 00:31:31.614 "dma_device_id": "system", 00:31:31.614 "dma_device_type": 1 00:31:31.614 } 00:31:31.614 ], 00:31:31.614 "driver_specific": { 00:31:31.614 "nvme": [ 00:31:31.614 { 00:31:31.614 "trid": { 00:31:31.614 "trtype": "TCP", 00:31:31.614 "adrfam": "IPv4", 00:31:31.614 "traddr": "10.0.0.2", 00:31:31.614 "trsvcid": "4420", 00:31:31.614 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:31.614 }, 00:31:31.614 "ctrlr_data": { 00:31:31.614 "cntlid": 1, 00:31:31.614 "vendor_id": "0x8086", 00:31:31.614 "model_number": "SPDK bdev Controller", 00:31:31.614 "serial_number": "SPDK0", 00:31:31.614 "firmware_revision": "25.01", 00:31:31.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:31.614 "oacs": { 00:31:31.614 "security": 0, 00:31:31.614 "format": 0, 00:31:31.614 "firmware": 0, 00:31:31.614 "ns_manage": 0 00:31:31.614 }, 00:31:31.614 "multi_ctrlr": true, 00:31:31.614 "ana_reporting": false 00:31:31.614 }, 00:31:31.614 "vs": { 00:31:31.614 "nvme_version": "1.3" 00:31:31.614 }, 00:31:31.614 "ns_data": { 00:31:31.614 "id": 1, 00:31:31.615 "can_share": true 00:31:31.615 } 00:31:31.615 } 00:31:31.615 ], 00:31:31.615 "mp_policy": "active_passive" 00:31:31.615 } 00:31:31.615 } 00:31:31.615 ] 00:31:31.615 14:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1252556 00:31:31.615 14:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:31.615 14:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:31.615 Running I/O for 10 seconds... 
00:31:32.557 Latency(us) 00:31:32.557 [2024-10-30T13:18:30.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.557 Nvme0n1 : 1.00 16778.00 65.54 0.00 0.00 0.00 0.00 0.00 00:31:32.557 [2024-10-30T13:18:30.856Z] =================================================================================================================== 00:31:32.557 [2024-10-30T13:18:30.856Z] Total : 16778.00 65.54 0.00 0.00 0.00 0.00 0.00 00:31:32.557 00:31:33.499 14:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:33.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:33.760 Nvme0n1 : 2.00 17005.50 66.43 0.00 0.00 0.00 0.00 0.00 00:31:33.760 [2024-10-30T13:18:32.059Z] =================================================================================================================== 00:31:33.760 [2024-10-30T13:18:32.059Z] Total : 17005.50 66.43 0.00 0.00 0.00 0.00 0.00 00:31:33.760 00:31:33.760 true 00:31:33.760 14:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:33.760 14:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:34.021 14:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:34.021 14:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:34.021 14:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1252556 00:31:34.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:34.590 Nvme0n1 : 3.00 17173.33 67.08 0.00 0.00 0.00 0.00 0.00 00:31:34.590 [2024-10-30T13:18:32.889Z] =================================================================================================================== 00:31:34.590 [2024-10-30T13:18:32.889Z] Total : 17173.33 67.08 0.00 0.00 0.00 0.00 0.00 00:31:34.590 00:31:35.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:35.529 Nvme0n1 : 4.00 17382.25 67.90 0.00 0.00 0.00 0.00 0.00 00:31:35.529 [2024-10-30T13:18:33.828Z] =================================================================================================================== 00:31:35.529 [2024-10-30T13:18:33.828Z] Total : 17382.25 67.90 0.00 0.00 0.00 0.00 0.00 00:31:35.529 00:31:36.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:36.911 Nvme0n1 : 5.00 18921.60 73.91 0.00 0.00 0.00 0.00 0.00 00:31:36.911 [2024-10-30T13:18:35.210Z] =================================================================================================================== 00:31:36.911 [2024-10-30T13:18:35.210Z] Total : 18921.60 73.91 0.00 0.00 0.00 0.00 0.00 00:31:36.911 00:31:37.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:37.853 Nvme0n1 : 6.00 20002.83 78.14 0.00 0.00 0.00 0.00 0.00 00:31:37.853 [2024-10-30T13:18:36.152Z] 
=================================================================================================================== 00:31:37.853 [2024-10-30T13:18:36.152Z] Total : 20002.83 78.14 0.00 0.00 0.00 0.00 0.00 00:31:37.853 00:31:38.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:38.795 Nvme0n1 : 7.00 20784.00 81.19 0.00 0.00 0.00 0.00 0.00 00:31:38.795 [2024-10-30T13:18:37.094Z] =================================================================================================================== 00:31:38.795 [2024-10-30T13:18:37.094Z] Total : 20784.00 81.19 0.00 0.00 0.00 0.00 0.00 00:31:38.795 00:31:39.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:39.737 Nvme0n1 : 8.00 21362.00 83.45 0.00 0.00 0.00 0.00 0.00 00:31:39.737 [2024-10-30T13:18:38.036Z] =================================================================================================================== 00:31:39.737 [2024-10-30T13:18:38.036Z] Total : 21362.00 83.45 0.00 0.00 0.00 0.00 0.00 00:31:39.737 00:31:40.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:40.680 Nvme0n1 : 9.00 21825.78 85.26 0.00 0.00 0.00 0.00 0.00 00:31:40.681 [2024-10-30T13:18:38.980Z] =================================================================================================================== 00:31:40.681 [2024-10-30T13:18:38.980Z] Total : 21825.78 85.26 0.00 0.00 0.00 0.00 0.00 00:31:40.681 00:31:41.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.622 Nvme0n1 : 10.00 22190.40 86.68 0.00 0.00 0.00 0.00 0.00 00:31:41.622 [2024-10-30T13:18:39.921Z] =================================================================================================================== 00:31:41.622 [2024-10-30T13:18:39.921Z] Total : 22190.40 86.68 0.00 0.00 0.00 0.00 0.00 00:31:41.622 00:31:41.622 00:31:41.622 Latency(us) 00:31:41.622 [2024-10-30T13:18:39.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.622 Nvme0n1 : 10.00 22194.62 86.70 0.00 0.00 5764.04 3208.53 24029.87 00:31:41.622 [2024-10-30T13:18:39.921Z] =================================================================================================================== 00:31:41.622 [2024-10-30T13:18:39.921Z] Total : 22194.62 86.70 0.00 0.00 5764.04 3208.53 24029.87 00:31:41.622 { 00:31:41.622 "results": [ 00:31:41.622 { 00:31:41.622 "job": "Nvme0n1", 00:31:41.622 "core_mask": "0x2", 00:31:41.622 "workload": "randwrite", 00:31:41.622 "status": "finished", 00:31:41.622 "queue_depth": 128, 00:31:41.622 "io_size": 4096, 00:31:41.622 "runtime": 10.003866, 00:31:41.622 "iops": 22194.619560078074, 00:31:41.622 "mibps": 86.69773265655498, 00:31:41.622 "io_failed": 0, 00:31:41.622 "io_timeout": 0, 00:31:41.622 "avg_latency_us": 5764.04381734765, 00:31:41.622 "min_latency_us": 3208.5333333333333, 00:31:41.622 "max_latency_us": 24029.866666666665 00:31:41.622 } 00:31:41.622 ], 00:31:41.622 "core_count": 1 00:31:41.622 } 00:31:41.622 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1252406 00:31:41.622 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1252406 ']' 00:31:41.622 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1252406 
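The per-second table and the summary JSON above are produced by bdevperf driven over its own RPC socket: the tool is started idle with -z, the target's subsystem (nqn.2016-06.io.spdk:cnode0) is attached as Nvme0 over TCP, and perform_tests triggers the timed run. Stripped of the waitforlisten/trap handling of the real script, the harness looks roughly like this:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
"$SPDK/build/examples/bdevperf" -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!
sleep 2        # stand-in for the waitforlisten helper the traced script uses
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
kill "$bdevperf_pid"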
00:31:41.622 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:41.622 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:41.623 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1252406 00:31:41.883 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:41.883 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:41.883 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1252406' 00:31:41.883 killing process with pid 1252406 00:31:41.883 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1252406 00:31:41.883 Received shutdown signal, test time was about 10.000000 seconds 00:31:41.883 00:31:41.883 Latency(us) 00:31:41.883 [2024-10-30T13:18:40.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.883 [2024-10-30T13:18:40.182Z] =================================================================================================================== 00:31:41.883 [2024-10-30T13:18:40.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:41.883 14:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1252406 00:31:41.883 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:42.144 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.144 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:42.144 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:42.404 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:42.404 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:42.404 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:42.404 [2024-10-30 14:18:40.691434] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 
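The NOT ... bdev_lvol_get_lvstores step above is the negative half of the clean-path teardown: after bdev_aio_delete removes the backing device, looking up the lvstore must fail with the JSON-RPC "No such device" error shown in the next entries, and the NOT helper turns that expected failure into a pass. A simplified stand-in for the helper (the real autotest_common.sh version traced here also inspects statuses above 128, i.e. signal exits):

NOT() {
        # Invert the wrapped command's status: failure is the expected outcome here.
        if "$@"; then
                return 1       # lookup unexpectedly succeeded -> test failure
        fi
        return 0
}
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1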
00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:42.666 request: 00:31:42.666 { 00:31:42.666 "uuid": "c8d8c85a-2bcf-4166-b60d-c7530c7252c1", 00:31:42.666 "method": "bdev_lvol_get_lvstores", 00:31:42.666 "req_id": 1 00:31:42.666 } 00:31:42.666 Got JSON-RPC error response 00:31:42.666 response: 00:31:42.666 { 00:31:42.666 "code": -19, 00:31:42.666 "message": "No such device" 00:31:42.666 } 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:42.666 14:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:42.927 aio_bdev 00:31:42.927 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
25ca4517-2394-42ab-bc23-d9db55aa90c9 00:31:42.927 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=25ca4517-2394-42ab-bc23-d9db55aa90c9 00:31:42.927 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:42.927 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:42.927 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:42.927 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:42.927 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:43.187 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 25ca4517-2394-42ab-bc23-d9db55aa90c9 -t 2000 00:31:43.187 [ 00:31:43.187 { 00:31:43.187 "name": "25ca4517-2394-42ab-bc23-d9db55aa90c9", 00:31:43.187 "aliases": [ 00:31:43.187 "lvs/lvol" 00:31:43.187 ], 00:31:43.187 "product_name": "Logical Volume", 00:31:43.187 "block_size": 4096, 00:31:43.187 "num_blocks": 38912, 00:31:43.187 "uuid": "25ca4517-2394-42ab-bc23-d9db55aa90c9", 00:31:43.187 "assigned_rate_limits": { 00:31:43.187 "rw_ios_per_sec": 0, 00:31:43.187 "rw_mbytes_per_sec": 0, 00:31:43.187 "r_mbytes_per_sec": 0, 00:31:43.187 "w_mbytes_per_sec": 0 00:31:43.187 }, 00:31:43.187 "claimed": false, 00:31:43.187 "zoned": false, 00:31:43.187 "supported_io_types": { 00:31:43.187 "read": true, 00:31:43.187 "write": true, 00:31:43.187 "unmap": true, 00:31:43.187 "flush": false, 00:31:43.187 "reset": true, 00:31:43.187 "nvme_admin": false, 00:31:43.187 "nvme_io": false, 00:31:43.187 "nvme_io_md": false, 00:31:43.187 "write_zeroes": true, 00:31:43.187 "zcopy": false, 00:31:43.187 "get_zone_info": false, 00:31:43.187 "zone_management": false, 00:31:43.187 "zone_append": false, 00:31:43.187 "compare": false, 00:31:43.187 "compare_and_write": false, 00:31:43.187 "abort": false, 00:31:43.187 "seek_hole": true, 00:31:43.187 "seek_data": true, 00:31:43.187 "copy": false, 00:31:43.187 "nvme_iov_md": false 00:31:43.187 }, 00:31:43.187 "driver_specific": { 00:31:43.187 "lvol": { 00:31:43.187 "lvol_store_uuid": "c8d8c85a-2bcf-4166-b60d-c7530c7252c1", 00:31:43.187 "base_bdev": "aio_bdev", 00:31:43.187 "thin_provision": false, 00:31:43.187 "num_allocated_clusters": 38, 00:31:43.187 "snapshot": false, 00:31:43.187 "clone": false, 00:31:43.187 "esnap_clone": false 00:31:43.187 } 00:31:43.187 } 00:31:43.187 } 00:31:43.187 ] 00:31:43.187 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:43.187 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:43.187 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:43.448 14:18:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:43.448 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:43.448 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:43.709 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:43.709 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 25ca4517-2394-42ab-bc23-d9db55aa90c9 00:31:43.710 14:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c8d8c85a-2bcf-4166-b60d-c7530c7252c1 00:31:43.970 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:44.231 00:31:44.231 real 0m15.832s 00:31:44.231 user 0m15.440s 00:31:44.231 sys 0m1.429s 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:44.231 ************************************ 00:31:44.231 END TEST lvs_grow_clean 00:31:44.231 ************************************ 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:44.231 ************************************ 00:31:44.231 START TEST lvs_grow_dirty 00:31:44.231 ************************************ 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:44.231 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:44.492 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:44.492 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:44.753 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fd884423-5980-46b5-9ce8-6034128d3e2c 00:31:44.753 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:31:44.753 14:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:44.753 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:44.753 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:44.753 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd884423-5980-46b5-9ce8-6034128d3e2c lvol 150 00:31:45.014 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d6b750ef-c2b3-465d-a2f2-387e0417feea 00:31:45.014 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:45.014 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:45.274 [2024-10-30 14:18:43.367348] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:45.274 [2024-10-30 14:18:43.367492] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:45.274 true 00:31:45.274 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:31:45.274 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:45.274 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:45.274 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:45.568 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d6b750ef-c2b3-465d-a2f2-387e0417feea 00:31:45.851 14:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:45.851 [2024-10-30 14:18:44.023869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.851 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1255308 00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1255308 /var/tmp/bdevperf.sock 00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1255308 ']' 00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:46.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.112 14:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:46.112 [2024-10-30 14:18:44.273002] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:31:46.112 [2024-10-30 14:18:44.273060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255308 ] 00:31:46.112 [2024-10-30 14:18:44.357148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.112 [2024-10-30 14:18:44.387383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.051 14:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.051 14:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:47.051 14:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:47.051 Nvme0n1 00:31:47.051 14:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:47.311 [ 00:31:47.311 { 00:31:47.311 "name": "Nvme0n1", 00:31:47.311 "aliases": [ 00:31:47.311 "d6b750ef-c2b3-465d-a2f2-387e0417feea" 00:31:47.311 ], 00:31:47.311 "product_name": "NVMe disk", 00:31:47.311 "block_size": 4096, 00:31:47.311 "num_blocks": 38912, 00:31:47.311 "uuid": "d6b750ef-c2b3-465d-a2f2-387e0417feea", 00:31:47.311 "numa_id": 0, 00:31:47.311 "assigned_rate_limits": { 00:31:47.311 "rw_ios_per_sec": 0, 00:31:47.311 "rw_mbytes_per_sec": 0, 00:31:47.311 "r_mbytes_per_sec": 0, 00:31:47.311 "w_mbytes_per_sec": 0 00:31:47.311 }, 00:31:47.311 "claimed": false, 00:31:47.311 "zoned": false, 00:31:47.311 "supported_io_types": { 00:31:47.311 "read": true, 00:31:47.311 "write": true, 00:31:47.311 "unmap": true, 00:31:47.311 "flush": true, 00:31:47.311 "reset": true, 00:31:47.311 "nvme_admin": true, 00:31:47.311 "nvme_io": true, 00:31:47.311 "nvme_io_md": false, 00:31:47.311 "write_zeroes": true, 00:31:47.311 "zcopy": false, 00:31:47.311 "get_zone_info": false, 00:31:47.311 "zone_management": false, 00:31:47.311 "zone_append": false, 00:31:47.311 "compare": true, 00:31:47.311 "compare_and_write": true, 00:31:47.311 "abort": true, 00:31:47.311 "seek_hole": false, 00:31:47.311 "seek_data": false, 00:31:47.311 "copy": true, 00:31:47.311 "nvme_iov_md": false 00:31:47.311 }, 00:31:47.311 "memory_domains": [ 00:31:47.311 { 00:31:47.311 "dma_device_id": "system", 00:31:47.311 "dma_device_type": 1 00:31:47.311 } 00:31:47.311 ], 00:31:47.311 "driver_specific": { 00:31:47.311 "nvme": [ 00:31:47.311 { 00:31:47.311 "trid": { 00:31:47.311 "trtype": "TCP", 00:31:47.311 "adrfam": "IPv4", 00:31:47.311 "traddr": "10.0.0.2", 00:31:47.311 "trsvcid": "4420", 00:31:47.311 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:47.311 }, 00:31:47.311 "ctrlr_data": 
{ 00:31:47.311 "cntlid": 1, 00:31:47.311 "vendor_id": "0x8086", 00:31:47.311 "model_number": "SPDK bdev Controller", 00:31:47.311 "serial_number": "SPDK0", 00:31:47.311 "firmware_revision": "25.01", 00:31:47.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.311 "oacs": { 00:31:47.311 "security": 0, 00:31:47.311 "format": 0, 00:31:47.311 "firmware": 0, 00:31:47.311 "ns_manage": 0 00:31:47.311 }, 00:31:47.311 "multi_ctrlr": true, 00:31:47.312 "ana_reporting": false 00:31:47.312 }, 00:31:47.312 "vs": { 00:31:47.312 "nvme_version": "1.3" 00:31:47.312 }, 00:31:47.312 "ns_data": { 00:31:47.312 "id": 1, 00:31:47.312 "can_share": true 00:31:47.312 } 00:31:47.312 } 00:31:47.312 ], 00:31:47.312 "mp_policy": "active_passive" 00:31:47.312 } 00:31:47.312 } 00:31:47.312 ] 00:31:47.312 14:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1255625 00:31:47.312 14:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:47.312 14:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:47.312 Running I/O for 10 seconds... 00:31:48.254 Latency(us) 00:31:48.254 [2024-10-30T13:18:46.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.254 Nvme0n1 : 1.00 17466.00 68.23 0.00 0.00 0.00 0.00 0.00 00:31:48.254 [2024-10-30T13:18:46.553Z] =================================================================================================================== 00:31:48.254 [2024-10-30T13:18:46.553Z] Total : 17466.00 68.23 0.00 0.00 0.00 0.00 0.00 00:31:48.254 00:31:49.194 14:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:31:49.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.453 Nvme0n1 : 2.00 17661.50 68.99 0.00 0.00 0.00 0.00 0.00 00:31:49.453 [2024-10-30T13:18:47.752Z] =================================================================================================================== 00:31:49.453 [2024-10-30T13:18:47.752Z] Total : 17661.50 68.99 0.00 0.00 0.00 0.00 0.00 00:31:49.453 00:31:49.453 true 00:31:49.453 14:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:31:49.454 14:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:49.713 14:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:49.713 14:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:49.713 14:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1255625 00:31:50.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.284 Nvme0n1 : 
3.00 17747.67 69.33 0.00 0.00 0.00 0.00 0.00 00:31:50.284 [2024-10-30T13:18:48.583Z] =================================================================================================================== 00:31:50.284 [2024-10-30T13:18:48.583Z] Total : 17747.67 69.33 0.00 0.00 0.00 0.00 0.00 00:31:50.284 00:31:51.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.668 Nvme0n1 : 4.00 17799.25 69.53 0.00 0.00 0.00 0.00 0.00 00:31:51.668 [2024-10-30T13:18:49.967Z] =================================================================================================================== 00:31:51.668 [2024-10-30T13:18:49.967Z] Total : 17799.25 69.53 0.00 0.00 0.00 0.00 0.00 00:31:51.668 00:31:52.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.607 Nvme0n1 : 5.00 18725.20 73.15 0.00 0.00 0.00 0.00 0.00 00:31:52.607 [2024-10-30T13:18:50.906Z] =================================================================================================================== 00:31:52.607 [2024-10-30T13:18:50.906Z] Total : 18725.20 73.15 0.00 0.00 0.00 0.00 0.00 00:31:52.607 00:31:53.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.549 Nvme0n1 : 6.00 19839.17 77.50 0.00 0.00 0.00 0.00 0.00 00:31:53.549 [2024-10-30T13:18:51.848Z] =================================================================================================================== 00:31:53.549 [2024-10-30T13:18:51.848Z] Total : 19839.17 77.50 0.00 0.00 0.00 0.00 0.00 00:31:53.549 00:31:54.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.490 Nvme0n1 : 7.00 20634.57 80.60 0.00 0.00 0.00 0.00 0.00 00:31:54.490 [2024-10-30T13:18:52.789Z] =================================================================================================================== 00:31:54.490 [2024-10-30T13:18:52.789Z] Total : 20634.57 80.60 0.00 0.00 0.00 0.00 0.00 00:31:54.490 00:31:55.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.432 Nvme0n1 : 8.00 21231.38 82.94 0.00 0.00 0.00 0.00 0.00 00:31:55.432 [2024-10-30T13:18:53.731Z] =================================================================================================================== 00:31:55.432 [2024-10-30T13:18:53.731Z] Total : 21231.38 82.94 0.00 0.00 0.00 0.00 0.00 00:31:55.432 00:31:56.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.375 Nvme0n1 : 9.00 21702.00 84.77 0.00 0.00 0.00 0.00 0.00 00:31:56.375 [2024-10-30T13:18:54.674Z] =================================================================================================================== 00:31:56.375 [2024-10-30T13:18:54.674Z] Total : 21702.00 84.77 0.00 0.00 0.00 0.00 0.00 00:31:56.375 00:31:57.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.318 Nvme0n1 : 10.00 22072.50 86.22 0.00 0.00 0.00 0.00 0.00 00:31:57.318 [2024-10-30T13:18:55.617Z] =================================================================================================================== 00:31:57.318 [2024-10-30T13:18:55.617Z] Total : 22072.50 86.22 0.00 0.00 0.00 0.00 0.00 00:31:57.318 00:31:57.318 00:31:57.318 Latency(us) 00:31:57.318 [2024-10-30T13:18:55.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.318 Nvme0n1 : 10.00 22076.65 86.24 0.00 0.00 5794.89 3126.61 27852.80 00:31:57.318 
[2024-10-30T13:18:55.617Z] =================================================================================================================== 00:31:57.318 [2024-10-30T13:18:55.617Z] Total : 22076.65 86.24 0.00 0.00 5794.89 3126.61 27852.80 00:31:57.318 { 00:31:57.318 "results": [ 00:31:57.318 { 00:31:57.318 "job": "Nvme0n1", 00:31:57.318 "core_mask": "0x2", 00:31:57.318 "workload": "randwrite", 00:31:57.318 "status": "finished", 00:31:57.318 "queue_depth": 128, 00:31:57.318 "io_size": 4096, 00:31:57.318 "runtime": 10.003917, 00:31:57.318 "iops": 22076.6525751863, 00:31:57.318 "mibps": 86.23692412182149, 00:31:57.318 "io_failed": 0, 00:31:57.318 "io_timeout": 0, 00:31:57.318 "avg_latency_us": 5794.892810934573, 00:31:57.318 "min_latency_us": 3126.6133333333332, 00:31:57.318 "max_latency_us": 27852.8 00:31:57.318 } 00:31:57.318 ], 00:31:57.318 "core_count": 1 00:31:57.318 } 00:31:57.318 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1255308 00:31:57.318 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1255308 ']' 00:31:57.318 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1255308 00:31:57.318 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:57.318 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.318 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1255308 00:31:57.580 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:57.580 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:57.580 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1255308' 00:31:57.580 killing process with pid 1255308 00:31:57.580 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1255308 00:31:57.580 Received shutdown signal, test time was about 10.000000 seconds 00:31:57.580 00:31:57.580 Latency(us) 00:31:57.580 [2024-10-30T13:18:55.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.580 [2024-10-30T13:18:55.879Z] =================================================================================================================== 00:31:57.580 [2024-10-30T13:18:55.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:57.580 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1255308 00:31:57.580 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:57.841 14:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
00:31:57.841 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:31:57.841 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1251828 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1251828 00:31:58.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1251828 Killed "${NVMF_APP[@]}" "$@" 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1257644 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1257644 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1257644 ']' 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:58.102 14:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:58.102 [2024-10-30 14:18:56.368847] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:58.102 [2024-10-30 14:18:56.369832] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:31:58.102 [2024-10-30 14:18:56.369874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:58.362 [2024-10-30 14:18:56.462519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.362 [2024-10-30 14:18:56.492343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:58.362 [2024-10-30 14:18:56.492374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:58.362 [2024-10-30 14:18:56.492380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:58.362 [2024-10-30 14:18:56.492385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:58.362 [2024-10-30 14:18:56.492389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:58.362 [2024-10-30 14:18:56.492841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.362 [2024-10-30 14:18:56.542331] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:58.362 [2024-10-30 14:18:56.542525] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:58.934 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:58.934 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:58.934 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:58.934 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:58.934 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:59.194 [2024-10-30 14:18:57.407311] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:59.194 [2024-10-30 14:18:57.407569] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:59.194 [2024-10-30 14:18:57.407668] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d6b750ef-c2b3-465d-a2f2-387e0417feea 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d6b750ef-c2b3-465d-a2f2-387e0417feea 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:59.194 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:59.456 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d6b750ef-c2b3-465d-a2f2-387e0417feea -t 2000 00:31:59.718 [ 00:31:59.718 { 00:31:59.718 "name": "d6b750ef-c2b3-465d-a2f2-387e0417feea", 00:31:59.718 "aliases": [ 00:31:59.718 "lvs/lvol" 00:31:59.718 ], 00:31:59.718 "product_name": "Logical Volume", 00:31:59.718 "block_size": 4096, 00:31:59.718 "num_blocks": 38912, 00:31:59.718 "uuid": "d6b750ef-c2b3-465d-a2f2-387e0417feea", 00:31:59.718 "assigned_rate_limits": { 00:31:59.718 "rw_ios_per_sec": 0, 00:31:59.718 "rw_mbytes_per_sec": 0, 00:31:59.718 
"r_mbytes_per_sec": 0, 00:31:59.718 "w_mbytes_per_sec": 0 00:31:59.718 }, 00:31:59.718 "claimed": false, 00:31:59.718 "zoned": false, 00:31:59.718 "supported_io_types": { 00:31:59.718 "read": true, 00:31:59.718 "write": true, 00:31:59.718 "unmap": true, 00:31:59.718 "flush": false, 00:31:59.718 "reset": true, 00:31:59.718 "nvme_admin": false, 00:31:59.718 "nvme_io": false, 00:31:59.718 "nvme_io_md": false, 00:31:59.718 "write_zeroes": true, 00:31:59.718 "zcopy": false, 00:31:59.718 "get_zone_info": false, 00:31:59.718 "zone_management": false, 00:31:59.718 "zone_append": false, 00:31:59.718 "compare": false, 00:31:59.718 "compare_and_write": false, 00:31:59.718 "abort": false, 00:31:59.718 "seek_hole": true, 00:31:59.718 "seek_data": true, 00:31:59.718 "copy": false, 00:31:59.718 "nvme_iov_md": false 00:31:59.718 }, 00:31:59.718 "driver_specific": { 00:31:59.718 "lvol": { 00:31:59.718 "lvol_store_uuid": "fd884423-5980-46b5-9ce8-6034128d3e2c", 00:31:59.718 "base_bdev": "aio_bdev", 00:31:59.718 "thin_provision": false, 00:31:59.718 "num_allocated_clusters": 38, 00:31:59.718 "snapshot": false, 00:31:59.718 "clone": false, 00:31:59.718 "esnap_clone": false 00:31:59.718 } 00:31:59.718 } 00:31:59.718 } 00:31:59.718 ] 00:31:59.718 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:59.718 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:31:59.718 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:59.718 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:59.718 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:59.718 14:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:31:59.979 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:59.979 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:59.979 [2024-10-30 14:18:58.261302] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:32:00.240 14:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:32:00.240 request: 00:32:00.240 { 00:32:00.240 "uuid": "fd884423-5980-46b5-9ce8-6034128d3e2c", 00:32:00.240 "method": "bdev_lvol_get_lvstores", 00:32:00.240 "req_id": 1 00:32:00.240 } 00:32:00.240 Got JSON-RPC error response 00:32:00.240 response: 00:32:00.240 { 00:32:00.240 "code": -19, 00:32:00.240 "message": "No such device" 00:32:00.240 } 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:00.240 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:00.501 aio_bdev 00:32:00.501 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d6b750ef-c2b3-465d-a2f2-387e0417feea 00:32:00.501 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d6b750ef-c2b3-465d-a2f2-387e0417feea 00:32:00.501 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:00.501 14:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:00.501 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:00.501 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:00.501 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:00.762 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d6b750ef-c2b3-465d-a2f2-387e0417feea -t 2000 00:32:00.762 [ 00:32:00.762 { 00:32:00.762 "name": "d6b750ef-c2b3-465d-a2f2-387e0417feea", 00:32:00.762 "aliases": [ 00:32:00.762 "lvs/lvol" 00:32:00.762 ], 00:32:00.762 "product_name": "Logical Volume", 00:32:00.762 "block_size": 4096, 00:32:00.762 "num_blocks": 38912, 00:32:00.762 "uuid": "d6b750ef-c2b3-465d-a2f2-387e0417feea", 00:32:00.762 "assigned_rate_limits": { 00:32:00.762 "rw_ios_per_sec": 0, 00:32:00.762 "rw_mbytes_per_sec": 0, 00:32:00.762 "r_mbytes_per_sec": 0, 00:32:00.762 "w_mbytes_per_sec": 0 00:32:00.762 }, 00:32:00.762 "claimed": false, 00:32:00.762 "zoned": false, 00:32:00.762 "supported_io_types": { 00:32:00.762 "read": true, 00:32:00.762 "write": true, 00:32:00.762 "unmap": true, 00:32:00.762 "flush": false, 00:32:00.762 "reset": true, 00:32:00.762 "nvme_admin": false, 00:32:00.762 "nvme_io": false, 00:32:00.762 "nvme_io_md": false, 00:32:00.762 "write_zeroes": true, 00:32:00.762 "zcopy": false, 00:32:00.762 "get_zone_info": false, 00:32:00.762 "zone_management": false, 00:32:00.762 "zone_append": false, 00:32:00.762 "compare": false, 00:32:00.762 "compare_and_write": false, 00:32:00.762 "abort": false, 00:32:00.762 "seek_hole": true, 00:32:00.762 "seek_data": true, 00:32:00.762 "copy": false, 00:32:00.762 "nvme_iov_md": false 00:32:00.762 }, 00:32:00.762 "driver_specific": { 00:32:00.762 "lvol": { 00:32:00.762 "lvol_store_uuid": "fd884423-5980-46b5-9ce8-6034128d3e2c", 00:32:00.762 "base_bdev": "aio_bdev", 00:32:00.762 "thin_provision": false, 00:32:00.762 "num_allocated_clusters": 38, 00:32:00.763 "snapshot": false, 00:32:00.763 "clone": false, 00:32:00.763 "esnap_clone": false 00:32:00.763 } 00:32:00.763 } 00:32:00.763 } 00:32:00.763 ] 00:32:00.763 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:00.763 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:32:00.763 14:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:01.023 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:01.023 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:32:01.023 14:18:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:01.285 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:01.285 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d6b750ef-c2b3-465d-a2f2-387e0417feea 00:32:01.285 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd884423-5980-46b5-9ce8-6034128d3e2c 00:32:01.546 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:01.806 00:32:01.806 real 0m17.467s 00:32:01.806 user 0m35.328s 00:32:01.806 sys 0m3.151s 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:01.806 ************************************ 00:32:01.806 END TEST lvs_grow_dirty 00:32:01.806 ************************************ 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:01.806 14:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:01.806 nvmf_trace.0 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:01.806 rmmod nvme_tcp 00:32:01.806 rmmod nvme_fabrics 00:32:01.806 rmmod nvme_keyring 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1257644 ']' 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1257644 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1257644 ']' 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1257644 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:01.806 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1257644 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1257644' 00:32:02.067 killing process with pid 1257644 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1257644 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1257644 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.067 14:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:04.613 00:32:04.613 real 0m44.547s 00:32:04.613 user 0m53.824s 00:32:04.613 sys 0m10.508s 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:04.613 ************************************ 00:32:04.613 END TEST nvmf_lvs_grow 00:32:04.613 ************************************ 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:04.613 ************************************ 00:32:04.613 START TEST nvmf_bdev_io_wait 00:32:04.613 ************************************ 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:04.613 * Looking for test storage... 
00:32:04.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.613 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:04.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.614 --rc genhtml_branch_coverage=1 00:32:04.614 --rc genhtml_function_coverage=1 00:32:04.614 --rc genhtml_legend=1 00:32:04.614 --rc geninfo_all_blocks=1 00:32:04.614 --rc geninfo_unexecuted_blocks=1 00:32:04.614 00:32:04.614 ' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:04.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.614 --rc genhtml_branch_coverage=1 00:32:04.614 --rc genhtml_function_coverage=1 00:32:04.614 --rc genhtml_legend=1 00:32:04.614 --rc geninfo_all_blocks=1 00:32:04.614 --rc geninfo_unexecuted_blocks=1 00:32:04.614 00:32:04.614 ' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:04.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.614 --rc genhtml_branch_coverage=1 00:32:04.614 --rc genhtml_function_coverage=1 00:32:04.614 --rc genhtml_legend=1 00:32:04.614 --rc geninfo_all_blocks=1 00:32:04.614 --rc geninfo_unexecuted_blocks=1 00:32:04.614 00:32:04.614 ' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:04.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.614 --rc genhtml_branch_coverage=1 00:32:04.614 --rc genhtml_function_coverage=1 00:32:04.614 --rc genhtml_legend=1 00:32:04.614 --rc geninfo_all_blocks=1 00:32:04.614 --rc 
geninfo_unexecuted_blocks=1 00:32:04.614 00:32:04.614 ' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:04.614 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.615 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.615 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.615 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:04.615 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:04.615 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:04.615 14:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
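gather_supported_nvmf_pci_devs, traced above, buckets the host's NICs by PCI vendor:device ID and then keeps only the family requested by SPDK_TEST_NVMF_NICS=e810. Stripped down to the lines that matter for this run, it is approximately:

    local intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})     # both ports found below report 0x159b
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})   # one of several ConnectX IDs added in the trace
    pci_devs+=("${e810[@]}")
    [[ e810 == e810 ]] && pci_devs=("${e810[@]}") # only the e810 list survives for this job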
00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:12.755 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:12.755 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:12.755 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:12.755 
14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:12.755 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:12.755 14:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.755 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.755 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.755 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:12.755 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:12.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:32:12.756 00:32:12.756 --- 10.0.0.2 ping statistics --- 00:32:12.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.756 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:12.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:32:12.756 00:32:12.756 --- 10.0.0.1 ping statistics --- 00:32:12.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.756 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1262630 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1262630 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1262630 ']' 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
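nvmf_tcp_init, traced above, moves one E810 port into a private network namespace so the target and initiator sides talk over a real link; a condensed recap of what it ran, using the same interface names and addresses as the trace (iptables comment tag omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator-side port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                               # root ns -> target ns, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns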
00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.756 14:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.756 [2024-10-30 14:19:10.225126] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:12.756 [2024-10-30 14:19:10.226314] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:32:12.756 [2024-10-30 14:19:10.226371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.756 [2024-10-30 14:19:10.326719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:12.756 [2024-10-30 14:19:10.380894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.756 [2024-10-30 14:19:10.380950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.756 [2024-10-30 14:19:10.380958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.756 [2024-10-30 14:19:10.380965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.756 [2024-10-30 14:19:10.380971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:12.756 [2024-10-30 14:19:10.383062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.756 [2024-10-30 14:19:10.383222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:12.756 [2024-10-30 14:19:10.383381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.756 [2024-10-30 14:19:10.383381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:12.756 [2024-10-30 14:19:10.383730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:32:12.756 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.756 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:12.756 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:12.756 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.756 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.016 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.016 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:13.016 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.016 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.017 [2024-10-30 14:19:11.150909] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:13.017 [2024-10-30 14:19:11.151840] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:13.017 [2024-10-30 14:19:11.152138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:13.017 [2024-10-30 14:19:11.152270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
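The target was started inside that namespace with --wait-for-rpc (nvmf/common.sh@508 above), so nothing initializes until the two RPCs traced here. Written with scripts/rpc.py standing in for the suite's rpc_cmd wrapper, the sequence is roughly:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # -p/-c: bdev_io pool/cache sizes; the deliberately tiny pool is
                                                  # what lets this bdev_io_wait test exercise queued (IO-wait) submissions
    ./scripts/rpc.py framework_start_init         # subsystem init only runs after this call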
00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.017 [2024-10-30 14:19:11.163977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.017 Malloc0 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.017 [2024-10-30 14:19:11.240568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1262729 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1262731 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.017 { 00:32:13.017 "params": { 00:32:13.017 "name": "Nvme$subsystem", 00:32:13.017 "trtype": "$TEST_TRANSPORT", 00:32:13.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.017 "adrfam": "ipv4", 00:32:13.017 "trsvcid": "$NVMF_PORT", 00:32:13.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.017 "hdgst": ${hdgst:-false}, 00:32:13.017 "ddgst": ${ddgst:-false} 00:32:13.017 }, 00:32:13.017 "method": "bdev_nvme_attach_controller" 00:32:13.017 } 00:32:13.017 EOF 00:32:13.017 )") 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1262733 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.017 { 00:32:13.017 "params": { 00:32:13.017 "name": "Nvme$subsystem", 00:32:13.017 "trtype": "$TEST_TRANSPORT", 00:32:13.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.017 "adrfam": "ipv4", 00:32:13.017 "trsvcid": "$NVMF_PORT", 00:32:13.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.017 "hdgst": ${hdgst:-false}, 00:32:13.017 "ddgst": ${ddgst:-false} 00:32:13.017 }, 00:32:13.017 "method": "bdev_nvme_attach_controller" 00:32:13.017 } 00:32:13.017 EOF 00:32:13.017 )") 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1262736 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@35 -- # sync 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.017 { 00:32:13.017 "params": { 00:32:13.017 "name": "Nvme$subsystem", 00:32:13.017 "trtype": "$TEST_TRANSPORT", 00:32:13.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.017 "adrfam": "ipv4", 00:32:13.017 "trsvcid": "$NVMF_PORT", 00:32:13.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.017 "hdgst": ${hdgst:-false}, 00:32:13.017 "ddgst": ${ddgst:-false} 00:32:13.017 }, 00:32:13.017 "method": "bdev_nvme_attach_controller" 00:32:13.017 } 00:32:13.017 EOF 00:32:13.017 )") 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.017 { 00:32:13.017 "params": { 00:32:13.017 "name": "Nvme$subsystem", 00:32:13.017 "trtype": "$TEST_TRANSPORT", 00:32:13.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.017 "adrfam": "ipv4", 00:32:13.017 "trsvcid": "$NVMF_PORT", 00:32:13.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.017 "hdgst": ${hdgst:-false}, 00:32:13.017 "ddgst": ${ddgst:-false} 00:32:13.017 }, 00:32:13.017 "method": "bdev_nvme_attach_controller" 00:32:13.017 } 00:32:13.017 EOF 00:32:13.017 )") 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1262729 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
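Put together, the provisioning traced just above (bdev_io_wait.sh@20-@25) plus the four I/O jobs it is about to launch amount to roughly the following; rpc.py again stands in for rpc_cmd, and the --json /dev/fd/63 seen in the trace is a process substitution of gen_nvmf_target_json:

    # Target side, over the RPC socket inside the namespace
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: one bdevperf per workload, each on its own core mask and shm id
    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    wait                                          # the script actually waits on the individual PIDs (1262729, 1262731, ...)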
00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:13.017 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.017 "params": { 00:32:13.017 "name": "Nvme1", 00:32:13.017 "trtype": "tcp", 00:32:13.017 "traddr": "10.0.0.2", 00:32:13.017 "adrfam": "ipv4", 00:32:13.017 "trsvcid": "4420", 00:32:13.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.017 "hdgst": false, 00:32:13.017 "ddgst": false 00:32:13.017 }, 00:32:13.017 "method": "bdev_nvme_attach_controller" 00:32:13.018 }' 00:32:13.018 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:13.018 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:13.018 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.018 "params": { 00:32:13.018 "name": "Nvme1", 00:32:13.018 "trtype": "tcp", 00:32:13.018 "traddr": "10.0.0.2", 00:32:13.018 "adrfam": "ipv4", 00:32:13.018 "trsvcid": "4420", 00:32:13.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.018 "hdgst": false, 00:32:13.018 "ddgst": false 00:32:13.018 }, 00:32:13.018 "method": "bdev_nvme_attach_controller" 00:32:13.018 }' 00:32:13.018 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:13.018 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.018 "params": { 00:32:13.018 "name": "Nvme1", 00:32:13.018 "trtype": "tcp", 00:32:13.018 "traddr": "10.0.0.2", 00:32:13.018 "adrfam": "ipv4", 00:32:13.018 "trsvcid": "4420", 00:32:13.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.018 "hdgst": false, 00:32:13.018 "ddgst": false 00:32:13.018 }, 00:32:13.018 "method": "bdev_nvme_attach_controller" 00:32:13.018 }' 00:32:13.018 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:13.018 14:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.018 "params": { 00:32:13.018 "name": "Nvme1", 00:32:13.018 "trtype": "tcp", 00:32:13.018 "traddr": "10.0.0.2", 00:32:13.018 "adrfam": "ipv4", 00:32:13.018 "trsvcid": "4420", 00:32:13.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.018 "hdgst": false, 00:32:13.018 "ddgst": false 00:32:13.018 }, 00:32:13.018 "method": "bdev_nvme_attach_controller" 00:32:13.018 }' 00:32:13.018 [2024-10-30 14:19:11.299081] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:32:13.018 [2024-10-30 14:19:11.299081] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
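Each of the four printf blocks above resolves to the same connection stanza; reformatted for readability (the enclosing bdev-subsystem wrapper that gen_nvmf_target_json adds around it is omitted here), every bdevperf instance is handed:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }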
00:32:13.018 [2024-10-30 14:19:11.299155] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-30 14:19:11.299156] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:13.018 --proc-type=auto ] 00:32:13.018 [2024-10-30 14:19:11.300273] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:32:13.018 [2024-10-30 14:19:11.300327] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:13.018 [2024-10-30 14:19:11.303778] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:32:13.018 [2024-10-30 14:19:11.303851] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:13.278 [2024-10-30 14:19:11.520209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.278 [2024-10-30 14:19:11.560524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:13.539 [2024-10-30 14:19:11.609970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.539 [2024-10-30 14:19:11.649491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:13.539 [2024-10-30 14:19:11.703276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.539 [2024-10-30 14:19:11.746452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:13.539 [2024-10-30 14:19:11.773644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.539 [2024-10-30 14:19:11.811151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:13.800 Running I/O for 1 seconds... 00:32:13.800 Running I/O for 1 seconds... 00:32:13.800 Running I/O for 1 seconds... 00:32:13.800 Running I/O for 1 seconds... 
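A quick sanity check for the result tables that follow: with 4096-byte I/Os, the MiB/s column is simply IOPS x 4096 / 2^20, for example:

    echo '10859.31 * 4096 / 1048576' | bc -l      # = 42.42 MiB/s, matching the read job's row
    echo '7294.43 * 4096 / 1048576'  | bc -l      # = 28.49 MiB/s, matching the unmap job's row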
00:32:14.742 7328.00 IOPS, 28.62 MiB/s 00:32:14.742 Latency(us) 00:32:14.742 [2024-10-30T13:19:13.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.742 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:14.742 Nvme1n1 : 1.02 7294.43 28.49 0.00 0.00 17357.01 5024.43 28398.93 00:32:14.742 [2024-10-30T13:19:13.041Z] =================================================================================================================== 00:32:14.742 [2024-10-30T13:19:13.041Z] Total : 7294.43 28.49 0.00 0.00 17357.01 5024.43 28398.93 00:32:14.742 10822.00 IOPS, 42.27 MiB/s [2024-10-30T13:19:13.041Z] 6963.00 IOPS, 27.20 MiB/s 00:32:14.742 Latency(us) 00:32:14.742 [2024-10-30T13:19:13.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.742 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:14.742 Nvme1n1 : 1.01 10859.31 42.42 0.00 0.00 11733.77 5434.03 16930.13 00:32:14.742 [2024-10-30T13:19:13.041Z] =================================================================================================================== 00:32:14.742 [2024-10-30T13:19:13.041Z] Total : 10859.31 42.42 0.00 0.00 11733.77 5434.03 16930.13 00:32:14.742 00:32:14.742 Latency(us) 00:32:14.742 [2024-10-30T13:19:13.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.742 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:14.742 Nvme1n1 : 1.01 7074.40 27.63 0.00 0.00 18039.17 4887.89 34078.72 00:32:14.742 [2024-10-30T13:19:13.041Z] =================================================================================================================== 00:32:14.742 [2024-10-30T13:19:13.041Z] Total : 7074.40 27.63 0.00 0.00 18039.17 4887.89 34078.72 00:32:14.742 188336.00 IOPS, 735.69 MiB/s 00:32:14.742 Latency(us) 00:32:14.742 [2024-10-30T13:19:13.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.742 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:14.742 Nvme1n1 : 1.00 187961.50 734.22 0.00 0.00 677.42 302.08 1966.08 00:32:14.743 [2024-10-30T13:19:13.042Z] =================================================================================================================== 00:32:14.743 [2024-10-30T13:19:13.042Z] Total : 187961.50 734.22 0.00 0.00 677.42 302.08 1966.08 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1262731 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1262733 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1262736 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:15.004 14:19:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.004 rmmod nvme_tcp 00:32:15.004 rmmod nvme_fabrics 00:32:15.004 rmmod nvme_keyring 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1262630 ']' 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1262630 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1262630 ']' 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1262630 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1262630 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1262630' 00:32:15.004 killing process with pid 1262630 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1262630 00:32:15.004 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1262630 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:15.265 14:19:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.265 14:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:17.818 00:32:17.818 real 0m13.044s 00:32:17.818 user 0m15.494s 00:32:17.818 sys 0m7.877s 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:17.818 ************************************ 00:32:17.818 END TEST nvmf_bdev_io_wait 00:32:17.818 ************************************ 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:17.818 ************************************ 00:32:17.818 START TEST nvmf_queue_depth 00:32:17.818 ************************************ 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:17.818 * Looking for test storage... 
00:32:17.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:17.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.818 --rc genhtml_branch_coverage=1 00:32:17.818 --rc genhtml_function_coverage=1 00:32:17.818 --rc genhtml_legend=1 00:32:17.818 --rc geninfo_all_blocks=1 00:32:17.818 --rc geninfo_unexecuted_blocks=1 00:32:17.818 00:32:17.818 ' 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:17.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.818 --rc genhtml_branch_coverage=1 00:32:17.818 --rc genhtml_function_coverage=1 00:32:17.818 --rc genhtml_legend=1 00:32:17.818 --rc geninfo_all_blocks=1 00:32:17.818 --rc geninfo_unexecuted_blocks=1 00:32:17.818 00:32:17.818 ' 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:17.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.818 --rc genhtml_branch_coverage=1 00:32:17.818 --rc genhtml_function_coverage=1 00:32:17.818 --rc genhtml_legend=1 00:32:17.818 --rc geninfo_all_blocks=1 00:32:17.818 --rc geninfo_unexecuted_blocks=1 00:32:17.818 00:32:17.818 ' 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:17.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.818 --rc genhtml_branch_coverage=1 00:32:17.818 --rc genhtml_function_coverage=1 00:32:17.818 --rc genhtml_legend=1 00:32:17.818 --rc geninfo_all_blocks=1 00:32:17.818 --rc 
geninfo_unexecuted_blocks=1 00:32:17.818 00:32:17.818 ' 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:17.818 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.819 14:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
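Before touching any hardware, queue_depth.sh sources test/nvmf/common.sh, which pins the listener ports, generates a host NQN with nvme-cli, and appends the interrupt-mode arguments to the nvmf_tgt command line; the test itself only defines a small malloc backing device and a private bdevperf RPC socket. The relevant assignments, condensed from the values visible in this trace (the host UUID is run-specific, and the shm id is 0 in this job):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:00d0226a-... here

    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    NVMF_APP+=(--interrupt-mode)           # added because the suite runs with --interrupt-mode

    MALLOC_BDEV_SIZE=64                    # MB, backs the namespace exported over TCP
    MALLOC_BLOCK_SIZE=512                  # bytes
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock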
00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.966 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:25.967 14:19:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:25.967 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:25.967 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
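gather_supported_nvmf_pci_devs matches the Intel E810 functions by vendor and device ID (0x8086:0x159b at 0000:4b:00.0 and .1 here) and then resolves each PCI function to its kernel net device through sysfs, which is where the cvl_0_0 and cvl_0_1 names reported next come from. A minimal sketch of that sysfs lookup, assuming an ice-bound port that is up (the PCI address is this job's, not a fixed value):

    pci=0000:4b:00.0
    # A bound net driver exposes its interfaces under the PCI function's net/ directory
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"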
00:32:25.967 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:25.967 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.967 14:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:25.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:32:25.967 00:32:25.967 --- 10.0.0.2 ping statistics --- 00:32:25.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.967 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:25.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:32:25.967 00:32:25.967 --- 10.0.0.1 ping statistics --- 00:32:25.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.967 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1267414 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1267414 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:25.967 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1267414 ']' 00:32:25.968 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.968 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.968 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
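With two usable E810 ports, nvmf_tcp_init splits the physical link across a network namespace: the target port cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, a tagged iptables rule opens the NVMe/TCP listener port, and a ping in each direction (the 0.557 ms and 0.290 ms replies above) proves the path before any NVMe traffic is attempted. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open TCP/4420, tagged so nvmftestfini can strip the rule again later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                                # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> root namespace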
00:32:25.968 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.968 14:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.968 [2024-10-30 14:19:23.358018] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:25.968 [2024-10-30 14:19:23.359102] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:32:25.968 [2024-10-30 14:19:23.359150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.968 [2024-10-30 14:19:23.463650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.968 [2024-10-30 14:19:23.514336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.968 [2024-10-30 14:19:23.514389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.968 [2024-10-30 14:19:23.514397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.968 [2024-10-30 14:19:23.514404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.968 [2024-10-30 14:19:23.514411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.968 [2024-10-30 14:19:23.515159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.968 [2024-10-30 14:19:23.591261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:25.968 [2024-10-30 14:19:23.591560] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
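The target application is then started inside that namespace. nvmfappstart prefixes the binary with the namespace exec command, enables every tracepoint group, and keeps the --interrupt-mode flag, which is why the spdk_interrupt_mode_enable and spdk_thread_set_interrupt_mode notices above report the app thread and the poll group coming up in interrupt rather than polling mode. Roughly, with waitforlisten taken from the suite's autotest_common.sh (its internals are not expanded in this trace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # shm id 0, all tracepoint groups (0xFFFF), core mask 0x2, event-driven reactors
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!

    # Block until the app answers on its default RPC socket, /var/tmp/spdk.sock
    waitforlisten "$nvmfpid"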
00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.968 [2024-10-30 14:19:24.236019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.968 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:26.229 Malloc0 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
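Once the target answers RPCs, the whole data path is provisioned through rpc_cmd, which in this suite resolves to scripts/rpc.py talking to /var/tmp/spdk.sock (a Unix-domain socket, so it is reachable from the root namespace even though the target runs inside cvl_0_0_ns_spdk): create the TCP transport with the options common.sh picked, back it with a 64 MB, 512-byte-block malloc bdev, create subsystem cnode1, attach the bdev as its namespace, and add a listener on 10.0.0.2:4420. Reconstructed from the rpc_cmd calls traced around this point:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420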
00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:26.229 [2024-10-30 14:19:24.316189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1267465 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1267465 /var/tmp/bdevperf.sock 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1267465 ']' 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:26.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.229 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:26.229 [2024-10-30 14:19:24.385155] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:32:26.229 [2024-10-30 14:19:24.385222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267465 ] 00:32:26.229 [2024-10-30 14:19:24.478416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.490 [2024-10-30 14:19:24.533162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.063 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.063 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:27.063 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:27.063 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.063 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.322 NVMe0n1 00:32:27.322 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.322 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:27.322 Running I/O for 10 seconds... 00:32:29.653 8711.00 IOPS, 34.03 MiB/s [2024-10-30T13:19:28.893Z] 8993.50 IOPS, 35.13 MiB/s [2024-10-30T13:19:29.834Z] 9561.33 IOPS, 37.35 MiB/s [2024-10-30T13:19:30.775Z] 10653.75 IOPS, 41.62 MiB/s [2024-10-30T13:19:31.719Z] 11290.40 IOPS, 44.10 MiB/s [2024-10-30T13:19:32.658Z] 11770.33 IOPS, 45.98 MiB/s [2024-10-30T13:19:33.601Z] 12085.71 IOPS, 47.21 MiB/s [2024-10-30T13:19:34.543Z] 12328.88 IOPS, 48.16 MiB/s [2024-10-30T13:19:35.931Z] 12539.11 IOPS, 48.98 MiB/s [2024-10-30T13:19:35.931Z] 12708.80 IOPS, 49.64 MiB/s 00:32:37.632 Latency(us) 00:32:37.632 [2024-10-30T13:19:35.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.632 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:37.632 Verification LBA range: start 0x0 length 0x4000 00:32:37.632 NVMe0n1 : 10.06 12729.41 49.72 0.00 0.00 80156.98 24029.87 69031.25 00:32:37.632 [2024-10-30T13:19:35.931Z] =================================================================================================================== 00:32:37.632 [2024-10-30T13:19:35.931Z] Total : 12729.41 49.72 0.00 0.00 80156.98 24029.87 69031.25 00:32:37.632 { 00:32:37.632 "results": [ 00:32:37.632 { 00:32:37.632 "job": "NVMe0n1", 00:32:37.632 "core_mask": "0x1", 00:32:37.632 "workload": "verify", 00:32:37.632 "status": "finished", 00:32:37.632 "verify_range": { 00:32:37.632 "start": 0, 00:32:37.632 "length": 16384 00:32:37.632 }, 00:32:37.632 "queue_depth": 1024, 00:32:37.632 "io_size": 4096, 00:32:37.632 "runtime": 10.060167, 00:32:37.632 "iops": 12729.410953118373, 00:32:37.632 "mibps": 49.724261535618645, 00:32:37.632 "io_failed": 0, 00:32:37.632 "io_timeout": 0, 00:32:37.632 "avg_latency_us": 80156.9766038836, 00:32:37.632 "min_latency_us": 24029.866666666665, 00:32:37.632 "max_latency_us": 69031.25333333333 00:32:37.632 } 
00:32:37.632 ], 00:32:37.632 "core_count": 1 00:32:37.632 } 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1267465 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1267465 ']' 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1267465 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1267465 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1267465' 00:32:37.632 killing process with pid 1267465 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1267465 00:32:37.632 Received shutdown signal, test time was about 10.000000 seconds 00:32:37.632 00:32:37.632 Latency(us) 00:32:37.632 [2024-10-30T13:19:35.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.632 [2024-10-30T13:19:35.931Z] =================================================================================================================== 00:32:37.632 [2024-10-30T13:19:35.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1267465 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:37.632 rmmod nvme_tcp 00:32:37.632 rmmod nvme_fabrics 00:32:37.632 rmmod nvme_keyring 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
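The measurement itself comes from a second SPDK application. bdevperf is started idle (-z) on its own RPC socket with the queue-depth parameters under test, the remote subsystem is attached over that socket as controller NVMe0, and bdevperf.py then runs the configured 10-second verify workload at queue depth 1024 with 4 KiB IOs, which is what produced the roughly 12.7k IOPS table above. As traced, with the workspace path shortened for readability:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # 1. Start bdevperf waiting for RPC configuration (-z) on a private socket
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # 2. Attach the target's subsystem over TCP; the resulting bdev is NVMe0n1
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # 3. Kick off the workload and collect per-bdev IOPS and latency when it ends
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests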
00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1267414 ']' 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1267414 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1267414 ']' 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1267414 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.632 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1267414 00:32:37.893 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:37.893 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:37.894 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1267414' 00:32:37.894 killing process with pid 1267414 00:32:37.894 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1267414 00:32:37.894 14:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1267414 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.894 14:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:40.437 00:32:40.437 real 0m22.533s 00:32:40.437 user 0m24.886s 00:32:40.437 sys 0m7.392s 00:32:40.437 14:19:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:40.437 ************************************ 00:32:40.437 END TEST nvmf_queue_depth 00:32:40.437 ************************************ 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:40.437 ************************************ 00:32:40.437 START TEST nvmf_target_multipath 00:32:40.437 ************************************ 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:40.437 * Looking for test storage... 00:32:40.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.437 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:40.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.438 --rc genhtml_branch_coverage=1 00:32:40.438 --rc genhtml_function_coverage=1 00:32:40.438 --rc genhtml_legend=1 00:32:40.438 --rc geninfo_all_blocks=1 00:32:40.438 --rc geninfo_unexecuted_blocks=1 00:32:40.438 00:32:40.438 ' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:40.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.438 --rc genhtml_branch_coverage=1 00:32:40.438 --rc genhtml_function_coverage=1 00:32:40.438 --rc genhtml_legend=1 00:32:40.438 --rc geninfo_all_blocks=1 00:32:40.438 --rc geninfo_unexecuted_blocks=1 00:32:40.438 00:32:40.438 ' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:40.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.438 --rc genhtml_branch_coverage=1 00:32:40.438 --rc genhtml_function_coverage=1 00:32:40.438 --rc genhtml_legend=1 
00:32:40.438 --rc geninfo_all_blocks=1 00:32:40.438 --rc geninfo_unexecuted_blocks=1 00:32:40.438 00:32:40.438 ' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:40.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.438 --rc genhtml_branch_coverage=1 00:32:40.438 --rc genhtml_function_coverage=1 00:32:40.438 --rc genhtml_legend=1 00:32:40.438 --rc geninfo_all_blocks=1 00:32:40.438 --rc geninfo_unexecuted_blocks=1 00:32:40.438 00:32:40.438 ' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:40.438 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:48.579 14:19:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:48.579 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:48.579 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:48.579 14:19:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:48.579 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:48.579 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:48.579 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:48.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:48.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:32:48.580 00:32:48.580 --- 10.0.0.2 ping statistics --- 00:32:48.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.580 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:48.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:48.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:32:48.580 00:32:48.580 --- 10.0.0.1 ping statistics --- 00:32:48.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.580 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:48.580 only one NIC for nvmf test 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.580 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:48.580 rmmod nvme_tcp 00:32:48.580 rmmod nvme_fabrics 00:32:48.580 rmmod nvme_keyring 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:48.580 14:19:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.580 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:50.016 14:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.016 00:32:50.016 real 0m9.962s 00:32:50.016 user 0m2.156s 00:32:50.016 sys 0m5.758s 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:50.016 ************************************ 00:32:50.016 END TEST nvmf_target_multipath 00:32:50.016 ************************************ 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:50.016 ************************************ 00:32:50.016 START TEST nvmf_zcopy 00:32:50.016 ************************************ 00:32:50.016 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:50.326 * Looking for test storage... 
00:32:50.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.326 --rc genhtml_branch_coverage=1 00:32:50.326 --rc genhtml_function_coverage=1 00:32:50.326 --rc genhtml_legend=1 00:32:50.326 --rc geninfo_all_blocks=1 00:32:50.326 --rc geninfo_unexecuted_blocks=1 00:32:50.326 00:32:50.326 ' 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.326 --rc genhtml_branch_coverage=1 00:32:50.326 --rc genhtml_function_coverage=1 00:32:50.326 --rc genhtml_legend=1 00:32:50.326 --rc geninfo_all_blocks=1 00:32:50.326 --rc geninfo_unexecuted_blocks=1 00:32:50.326 00:32:50.326 ' 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.326 --rc genhtml_branch_coverage=1 00:32:50.326 --rc genhtml_function_coverage=1 00:32:50.326 --rc genhtml_legend=1 00:32:50.326 --rc geninfo_all_blocks=1 00:32:50.326 --rc geninfo_unexecuted_blocks=1 00:32:50.326 00:32:50.326 ' 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.326 --rc genhtml_branch_coverage=1 00:32:50.326 --rc genhtml_function_coverage=1 00:32:50.326 --rc genhtml_legend=1 00:32:50.326 --rc geninfo_all_blocks=1 00:32:50.326 --rc geninfo_unexecuted_blocks=1 00:32:50.326 00:32:50.326 ' 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:50.326 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.327 14:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.327 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:58.589 14:19:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:58.589 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:58.590 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:58.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:58.590 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:58.590 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:58.590 14:19:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:58.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:58.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:32:58.590 00:32:58.590 --- 10.0.0.2 ping statistics --- 00:32:58.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.590 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:58.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:58.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:32:58.590 00:32:58.590 --- 10.0.0.1 ping statistics --- 00:32:58.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.590 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:58.590 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.591 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1277962 00:32:58.591 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1277962 00:32:58.591 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:58.591 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1277962 ']' 00:32:58.591 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.591 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:58.591 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:58.591 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:58.591 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.591 [2024-10-30 14:19:55.878194] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:58.591 [2024-10-30 14:19:55.879177] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:32:58.591 [2024-10-30 14:19:55.879215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.591 [2024-10-30 14:19:55.974123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.591 [2024-10-30 14:19:56.008881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.591 [2024-10-30 14:19:56.008913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.591 [2024-10-30 14:19:56.008921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.591 [2024-10-30 14:19:56.008928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.591 [2024-10-30 14:19:56.008934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:58.591 [2024-10-30 14:19:56.009464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.591 [2024-10-30 14:19:56.064244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:58.591 [2024-10-30 14:19:56.064501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
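The nvmf_tcp_init trace above boils down to: flush both E810 ports, move the target-side port (cvl_0_0) into a private network namespace, address the pair as 10.0.0.1/10.0.0.2, open TCP port 4420, verify connectivity with ping in both directions, and then start nvmf_tgt inside that namespace on a single core in interrupt mode. Condensed into plain shell, it is roughly the sketch below (a condensed sketch of the steps traced above with the workspace path shortened, not the literal nvmf/common.sh code):

# Target-side port lives in its own namespace; initiator side stays in the root ns.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic reach the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
# Single core (-m 0x2), tracepoint group mask 0xFFFF, interrupt mode, run inside the namespace.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &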
00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.591 [2024-10-30 14:19:56.714239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.591 [2024-10-30 14:19:56.742473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:58.591 14:19:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.591 malloc0 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.591 { 00:32:58.591 "params": { 00:32:58.591 "name": "Nvme$subsystem", 00:32:58.591 "trtype": "$TEST_TRANSPORT", 00:32:58.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.591 "adrfam": "ipv4", 00:32:58.591 "trsvcid": "$NVMF_PORT", 00:32:58.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.591 "hdgst": ${hdgst:-false}, 00:32:58.591 "ddgst": ${ddgst:-false} 00:32:58.591 }, 00:32:58.591 "method": "bdev_nvme_attach_controller" 00:32:58.591 } 00:32:58.591 EOF 00:32:58.591 )") 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:58.591 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.591 "params": { 00:32:58.591 "name": "Nvme1", 00:32:58.591 "trtype": "tcp", 00:32:58.591 "traddr": "10.0.0.2", 00:32:58.591 "adrfam": "ipv4", 00:32:58.591 "trsvcid": "4420", 00:32:58.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.591 "hdgst": false, 00:32:58.591 "ddgst": false 00:32:58.591 }, 00:32:58.591 "method": "bdev_nvme_attach_controller" 00:32:58.591 }' 00:32:58.591 [2024-10-30 14:19:56.845821] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
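With the target listening, target/zcopy.sh configures it over the RPC socket: a TCP transport created with the test's options (-o -c 0 --zcopy, i.e. zero-copy send enabled), subsystem nqn.2016-06.io.spdk:cnode1 plus a discovery listener, a data listener on 10.0.0.2:4420, a 32 MB malloc bdev with a 4 KiB block size, and that bdev attached as namespace 1. The test drives this through the rpc_cmd wrapper; an equivalent sequence with scripts/rpc.py would look roughly like the following (using the default /var/tmp/spdk.sock socket named in the waitforlisten message above):

# scripts/rpc.py equivalents of the rpc_cmd calls traced above, in execution order.
RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1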
00:32:58.591 [2024-10-30 14:19:56.845897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278131 ]
00:32:58.853 [2024-10-30 14:19:56.940328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:58.853 [2024-10-30 14:19:56.992692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:59.115 Running I/O for 10 seconds...
00:33:01.446 6461.00 IOPS, 50.48 MiB/s
[2024-10-30T13:20:00.685Z] 6492.50 IOPS, 50.72 MiB/s
[2024-10-30T13:20:01.629Z] 6482.67 IOPS, 50.65 MiB/s
[2024-10-30T13:20:02.573Z] 6618.00 IOPS, 51.70 MiB/s
[2024-10-30T13:20:03.521Z] 7202.00 IOPS, 56.27 MiB/s
[2024-10-30T13:20:04.464Z] 7589.17 IOPS, 59.29 MiB/s
[2024-10-30T13:20:05.406Z] 7863.43 IOPS, 61.43 MiB/s
[2024-10-30T13:20:06.348Z] 8068.75 IOPS, 63.04 MiB/s
[2024-10-30T13:20:07.733Z] 8228.78 IOPS, 64.29 MiB/s
[2024-10-30T13:20:07.733Z] 8355.10 IOPS, 65.27 MiB/s
00:33:09.434 Latency(us)
00:33:09.434 [2024-10-30T13:20:07.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:09.434 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:09.434 Verification LBA range: start 0x0 length 0x1000
00:33:09.434 Nvme1n1 : 10.01 8360.14 65.31 0.00 0.00 15265.55 1392.64 28835.84
00:33:09.434 [2024-10-30T13:20:07.733Z] ===================================================================================================================
00:33:09.434 [2024-10-30T13:20:07.733Z] Total : 8360.14 65.31 0.00 0.00 15265.55 1392.64 28835.84
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1280141
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:09.434 {
00:33:09.434 "params": {
00:33:09.434 "name": "Nvme$subsystem",
00:33:09.434 "trtype": "$TEST_TRANSPORT",
00:33:09.434 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:09.434 "adrfam": "ipv4",
00:33:09.434 "trsvcid": "$NVMF_PORT",
00:33:09.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:09.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:09.434 "hdgst": ${hdgst:-false},
00:33:09.434 "ddgst": ${ddgst:-false}
00:33:09.434 },
00:33:09.434 "method": "bdev_nvme_attach_controller"
00:33:09.434 }
00:33:09.434 EOF
00:33:09.434 )")
00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:33:09.434 
[2024-10-30 14:20:07.461782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.434 [2024-10-30 14:20:07.461814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:09.434 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:09.434 "params": { 00:33:09.434 "name": "Nvme1", 00:33:09.434 "trtype": "tcp", 00:33:09.434 "traddr": "10.0.0.2", 00:33:09.434 "adrfam": "ipv4", 00:33:09.434 "trsvcid": "4420", 00:33:09.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:09.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:09.434 "hdgst": false, 00:33:09.434 "ddgst": false 00:33:09.434 }, 00:33:09.434 "method": "bdev_nvme_attach_controller" 00:33:09.434 }' 00:33:09.434 [2024-10-30 14:20:07.473753] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.434 [2024-10-30 14:20:07.473764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.434 [2024-10-30 14:20:07.485757] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.434 [2024-10-30 14:20:07.485765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.497750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.497758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.502354] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
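Both bdevperf runs above take their bdev configuration from an anonymous file descriptor via --json (process substitution around gen_nvmf_target_json): first a 10 s verify job at queue depth 128 with 8 KiB I/O, then a 5 s 50/50 randrw job with the same depth and size. Expanding the fragment printed by gen_nvmf_target_json into the JSON-config layout bdevperf consumes gives roughly the sketch below; the outer "subsystems"/"bdev" wrapper is reconstructed from SPDK's standard config format rather than copied from this log, and the temp-file name is arbitrary (the test itself feeds the config over /dev/fd/62 and /dev/fd/63):

# Roughly the config the first bdevperf invocation reads via --json.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192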
00:33:09.435 [2024-10-30 14:20:07.502402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280141 ] 00:33:09.435 [2024-10-30 14:20:07.509750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.509759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.521749] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.521757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.533750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.533758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.545749] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.545757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.557749] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.557757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.569749] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.569757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.581750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.581758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.586535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.435 [2024-10-30 14:20:07.593753] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.593763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.605750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.605758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.615832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.435 [2024-10-30 14:20:07.617750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.617763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.629753] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.629763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.641752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.641764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.653750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:09.435 [2024-10-30 14:20:07.653761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.665750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.665759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.677750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.677757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.689758] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.689774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.701752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.701762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.713752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.713763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.435 [2024-10-30 14:20:07.725753] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.435 [2024-10-30 14:20:07.725765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.737758] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.737773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 Running I/O for 5 seconds... 
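From "Running I/O for 5 seconds..." onward the excerpt interleaves two activities: the randrw bdevperf job making progress (the 18940.00 and 18962.00 IOPS samples further down) and target/zcopy.sh exercising namespace-management RPCs against cnode1 while that I/O is in flight. Every attempt to re-add malloc0 fails with "Requested NSID 1 already in use" followed by "Unable to add namespace"; NSID 1 is already occupied by the namespace attached earlier, so the steady stream of error pairs below appears to be the intended outcome of the exercise rather than a test failure. The pattern is essentially the following (a generic illustration of hammering the RPC path during live I/O, not the literal zcopy.sh loop; gen_nvmf_target_json is the nvmf/common.sh helper shown above):

# Keep poking the namespace RPC path while bdevperf runs; each call is expected
# to fail because malloc0 already holds NSID 1. Illustration only.
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
while kill -0 "$perfpid" 2> /dev/null; do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"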
00:33:09.696 [2024-10-30 14:20:07.749753] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.749767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.765601] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.765617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.778390] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.778406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.793181] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.793197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.806048] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.806064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.820914] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.820929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.834121] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.834137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.848765] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.848781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.861812] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.861834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.874230] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.874245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.888703] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.888718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.901507] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.901522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.913766] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.913781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.926478] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.926493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.940370] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 
[2024-10-30 14:20:07.940385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.952981] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.952996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.965843] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.965859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.977839] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.977854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.696 [2024-10-30 14:20:07.990617] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.696 [2024-10-30 14:20:07.990632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.005136] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.005153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.018131] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.018147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.032658] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.032674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.045694] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.045710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.058207] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.058222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.072735] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.072755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.085892] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.085908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.097530] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.097545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.110126] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.110144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.124828] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.124844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.137667] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.137682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.150261] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.150277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.165094] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.165110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.178096] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.178111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.192805] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.192821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.205572] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.205587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.217951] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.217966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.230188] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.230203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.957 [2024-10-30 14:20:08.244936] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.957 [2024-10-30 14:20:08.244952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.216 [2024-10-30 14:20:08.257331] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.216 [2024-10-30 14:20:08.257347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.216 [2024-10-30 14:20:08.269622] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.216 [2024-10-30 14:20:08.269638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.216 [2024-10-30 14:20:08.281594] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.216 [2024-10-30 14:20:08.281610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.216 [2024-10-30 14:20:08.294464] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.216 [2024-10-30 14:20:08.294480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.216 [2024-10-30 14:20:08.308856] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.216 [2024-10-30 14:20:08.308872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.216 [2024-10-30 14:20:08.321722] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.216 [2024-10-30 14:20:08.321738] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.216 [2024-10-30 14:20:08.334910] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.216 [2024-10-30 14:20:08.334925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.216 [2024-10-30 14:20:08.348821] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.348836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.361442] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.361458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.374145] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.374160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.389137] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.389153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.401944] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.401960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.414043] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.414058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.428431] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.428447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.441612] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.441628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.454499] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.454514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.468625] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.468640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.481293] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.481309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.494075] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.494090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.217 [2024-10-30 14:20:08.508555] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.217 [2024-10-30 14:20:08.508572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.521602] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.521618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.534178] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.534193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.549036] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.549052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.562018] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.562032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.576981] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.576996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.589742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.589848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.602640] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.602655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.617409] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.617425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.629436] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.629452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.641978] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.641993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.656826] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.656841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.477 [2024-10-30 14:20:08.669983] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.477 [2024-10-30 14:20:08.669998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.478 [2024-10-30 14:20:08.682620] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.478 [2024-10-30 14:20:08.682635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.478 [2024-10-30 14:20:08.697412] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.478 [2024-10-30 14:20:08.697428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.478 [2024-10-30 14:20:08.710324] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.478 [2024-10-30 14:20:08.710339] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.478 [2024-10-30 14:20:08.725254] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.478 [2024-10-30 14:20:08.725269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.478 [2024-10-30 14:20:08.738537] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.478 [2024-10-30 14:20:08.738552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.478 18940.00 IOPS, 147.97 MiB/s [2024-10-30T13:20:08.777Z] [2024-10-30 14:20:08.753195] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.478 [2024-10-30 14:20:08.753211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.478 [2024-10-30 14:20:08.765672] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.478 [2024-10-30 14:20:08.765687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.777935] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.777950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.792763] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.792779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.805650] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.805666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.817985] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.817999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.832809] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.832825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.845657] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.845672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.857712] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.857727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.870155] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.870169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.885079] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.885094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.897805] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.897820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 
14:20:08.910853] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.910868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.924606] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.924621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.937625] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.937641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.949969] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.949983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.964730] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.964749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.977294] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.977310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:08.989775] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:08.989790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:09.001832] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:09.001848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:09.014292] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:09.014308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.739 [2024-10-30 14:20:09.029172] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.739 [2024-10-30 14:20:09.029188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.999 [2024-10-30 14:20:09.042201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.999 [2024-10-30 14:20:09.042215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.999 [2024-10-30 14:20:09.057167] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.999 [2024-10-30 14:20:09.057182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.999 [2024-10-30 14:20:09.070156] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.999 [2024-10-30 14:20:09.070170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.999 [2024-10-30 14:20:09.085234] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.999 [2024-10-30 14:20:09.085250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.999 [2024-10-30 14:20:09.097563] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.999 [2024-10-30 14:20:09.097578] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:10.999 [2024-10-30 14:20:09.110143] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:10.999 [2024-10-30 14:20:09.110162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:11.526 18962.00 IOPS, 148.14 MiB/s [2024-10-30T13:20:09.825Z]
00:33:12.573 18961.00 IOPS, 148.13 MiB/s [2024-10-30T13:20:10.872Z]
00:33:13.621 18947.25 IOPS, 148.03 MiB/s [2024-10-30T13:20:11.920Z]
00:33:14.667 18945.60 IOPS, 148.01 MiB/s [2024-10-30T13:20:12.966Z]
00:33:14.667 [2024-10-30 14:20:12.764569] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:14.667 [2024-10-30 14:20:12.764584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:14.667 Latency(us)
00:33:14.667 [2024-10-30T13:20:12.966Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:14.667 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:14.667 Nvme1n1                                :       5.01   18951.09     148.06       0.00       0.00    6748.43    2184.53   11796.48
00:33:14.667 [2024-10-30T13:20:12.966Z] ===================================================================================================================
[2024-10-30T13:20:12.966Z] Total : 18951.09 148.06 0.00 0.00 6748.43 2184.53 11796.48 00:33:14.667 [2024-10-30 14:20:12.773755] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.667 [2024-10-30 14:20:12.773769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.667 [2024-10-30 14:20:12.785765] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.667 [2024-10-30 14:20:12.785779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.667 [2024-10-30 14:20:12.797754] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.667 [2024-10-30 14:20:12.797764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.667 [2024-10-30 14:20:12.809755] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.667 [2024-10-30 14:20:12.809768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.667 [2024-10-30 14:20:12.821752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.667 [2024-10-30 14:20:12.821760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.667 [2024-10-30 14:20:12.833751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.667 [2024-10-30 14:20:12.833760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.667 [2024-10-30 14:20:12.845751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.667 [2024-10-30 14:20:12.845760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.667 [2024-10-30 14:20:12.857751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.667 [2024-10-30 14:20:12.857760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.667 [2024-10-30 14:20:12.869750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.667 [2024-10-30 14:20:12.869758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1280141) - No such process 00:33:14.667 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1280141 00:33:14.667 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.667 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.667 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:14.667 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.667 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:14.667 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.667 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:14.667 delay0 00:33:14.667 14:20:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.668 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:14.668 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.668 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:14.668 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.668 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:14.929 [2024-10-30 14:20:13.036204] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:21.514 Initializing NVMe Controllers 00:33:21.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:21.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:21.514 Initialization complete. Launching workers. 00:33:21.514 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 251, failed: 26693 00:33:21.514 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26840, failed to submit 104 00:33:21.514 success 26761, unsuccessful 79, failed 0 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.514 rmmod nvme_tcp 00:33:21.514 rmmod nvme_fabrics 00:33:21.514 rmmod nvme_keyring 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1277962 ']' 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1277962 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1277962 ']' 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1277962 
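For reference, the abort step recorded above can be reproduced by hand with the same RPC calls and example binary. The following is a sketch assembled only from the commands visible in this trace; invoking them through scripts/rpc.py (instead of the test's rpc_cmd wrapper) and relying on the default RPC socket are assumptions.

  # Sketch: recreate the 1s-latency delay bdev over malloc0, expose it as NSID 1,
  # then drive and abort queued I/O against the TCP target, as zcopy.sh does above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The deliberately slow delay bdev keeps submitted commands outstanding, which is consistent with the summary above where almost all of the 26840 submitted aborts succeed.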
00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1277962 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1277962' 00:33:21.514 killing process with pid 1277962 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1277962 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1277962 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.514 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.058 00:33:24.058 real 0m33.556s 00:33:24.058 user 0m42.573s 00:33:24.058 sys 0m12.349s 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:24.058 ************************************ 00:33:24.058 END TEST nvmf_zcopy 00:33:24.058 ************************************ 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:24.058 ************************************ 00:33:24.058 START TEST nvmf_nmic 00:33:24.058 ************************************ 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:24.058 * Looking for test storage... 00:33:24.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:33:24.058 14:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:24.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.058 --rc genhtml_branch_coverage=1 00:33:24.058 --rc genhtml_function_coverage=1 00:33:24.058 --rc genhtml_legend=1 00:33:24.058 --rc geninfo_all_blocks=1 00:33:24.058 --rc geninfo_unexecuted_blocks=1 00:33:24.058 00:33:24.058 ' 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:24.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.058 --rc genhtml_branch_coverage=1 00:33:24.058 --rc genhtml_function_coverage=1 00:33:24.058 --rc genhtml_legend=1 00:33:24.058 --rc geninfo_all_blocks=1 00:33:24.058 --rc geninfo_unexecuted_blocks=1 00:33:24.058 00:33:24.058 ' 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:24.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.058 --rc genhtml_branch_coverage=1 00:33:24.058 --rc genhtml_function_coverage=1 00:33:24.058 --rc genhtml_legend=1 00:33:24.058 --rc geninfo_all_blocks=1 00:33:24.058 --rc geninfo_unexecuted_blocks=1 00:33:24.058 00:33:24.058 ' 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:24.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.058 --rc genhtml_branch_coverage=1 00:33:24.058 --rc genhtml_function_coverage=1 00:33:24.058 --rc genhtml_legend=1 00:33:24.058 --rc geninfo_all_blocks=1 00:33:24.058 --rc geninfo_unexecuted_blocks=1 00:33:24.058 00:33:24.058 ' 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.058 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.059 14:20:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:24.059 14:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.204 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:32.204 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:32.204 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:32.204 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:32.205 14:20:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:32.205 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.205 14:20:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:32.205 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:32.205 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.205 
14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:32.205 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
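Condensed for readability, the interface setup that nvmftestinit performs here (and completes in the next few trace lines with the link-up, iptables rule, and ping checks) amounts to the ip(8)/iptables commands below; the interface names are this machine's two e810 ports and will differ on other hosts.

  # Sketch of the target/initiator split done by nvmftestinit above:
  # cvl_0_0 becomes the target NIC inside a namespace, cvl_0_1 stays on the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator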
00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:32.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:32.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:33:32.205 00:33:32.205 --- 10.0.0.2 ping statistics --- 00:33:32.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.205 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:32.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:33:32.205 00:33:32.205 --- 10.0.0.1 ping statistics --- 00:33:32.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.205 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1286478 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1286478 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1286478 ']' 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.205 14:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.205 [2024-10-30 14:20:29.647896] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:32.205 [2024-10-30 14:20:29.649026] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:33:32.205 [2024-10-30 14:20:29.649077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.205 [2024-10-30 14:20:29.746225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:32.205 [2024-10-30 14:20:29.801073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.205 [2024-10-30 14:20:29.801125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.205 [2024-10-30 14:20:29.801135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.205 [2024-10-30 14:20:29.801143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.205 [2024-10-30 14:20:29.801149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:32.205 [2024-10-30 14:20:29.803169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.205 [2024-10-30 14:20:29.803326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:32.205 [2024-10-30 14:20:29.803485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.205 [2024-10-30 14:20:29.803486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:32.205 [2024-10-30 14:20:29.880722] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:32.205 [2024-10-30 14:20:29.880856] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:32.205 [2024-10-30 14:20:29.881756] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
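The target itself is launched inside that namespace in interrupt mode, as the nvmf_tgt command line above shows. A minimal way to reproduce the launch and the wait-for-RPC step by hand might look as follows; the rpc_get_methods polling loop is an assumption standing in for the test's waitforlisten helper, and the RPC socket is left at its default path.

  # Sketch: start nvmf_tgt on 4 cores (-m 0xF) in interrupt mode inside the
  # target namespace, then wait until its JSON-RPC server answers.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done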
00:33:32.205 [2024-10-30 14:20:29.882395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:32.206 [2024-10-30 14:20:29.882405] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:32.206 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.206 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:32.206 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:32.206 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.206 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.465 [2024-10-30 14:20:30.512351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.465 Malloc0 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
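The RPC sequence that builds the nmic target configuration is spread through the trace just above and below; collected in one place, and issued through scripts/rpc.py rather than the rpc_cmd wrapper (socket path assumed to be the default), it is roughly:

  # Sketch of the target setup recorded in this section.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Test case 1 below creates a second subsystem and expects the same bdev to be
  # rejected, since Malloc0 is already claimed exclusive_write by cnode1:
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails with -32602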
00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.465 [2024-10-30 14:20:30.600667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:32.465 test case1: single bdev can't be used in multiple subsystems 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:32.465 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.466 [2024-10-30 14:20:30.635980] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:32.466 [2024-10-30 14:20:30.636006] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:32.466 [2024-10-30 14:20:30.636015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.466 request: 00:33:32.466 { 00:33:32.466 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:32.466 "namespace": { 00:33:32.466 "bdev_name": "Malloc0", 00:33:32.466 "no_auto_visible": false 00:33:32.466 }, 00:33:32.466 "method": "nvmf_subsystem_add_ns", 00:33:32.466 "req_id": 1 00:33:32.466 } 00:33:32.466 Got JSON-RPC error response 00:33:32.466 response: 00:33:32.466 { 00:33:32.466 "code": -32602, 00:33:32.466 "message": "Invalid parameters" 00:33:32.466 } 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:32.466 14:20:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:32.466 Adding namespace failed - expected result. 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:32.466 test case2: host connect to nvmf target in multiple paths 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.466 [2024-10-30 14:20:30.648138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.466 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:33.036 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:33.609 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:33.609 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:33.609 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:33.609 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:33.609 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:33:35.525 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:35.525 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:35.525 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:35.525 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:35.525 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:35.525 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:33:35.525 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:35.525 [global] 00:33:35.525 thread=1 00:33:35.525 invalidate=1 
00:33:35.525 rw=write 00:33:35.525 time_based=1 00:33:35.525 runtime=1 00:33:35.525 ioengine=libaio 00:33:35.525 direct=1 00:33:35.525 bs=4096 00:33:35.525 iodepth=1 00:33:35.525 norandommap=0 00:33:35.525 numjobs=1 00:33:35.525 00:33:35.525 verify_dump=1 00:33:35.525 verify_backlog=512 00:33:35.525 verify_state_save=0 00:33:35.525 do_verify=1 00:33:35.525 verify=crc32c-intel 00:33:35.525 [job0] 00:33:35.525 filename=/dev/nvme0n1 00:33:35.525 Could not set queue depth (nvme0n1) 00:33:35.786 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:35.786 fio-3.35 00:33:35.786 Starting 1 thread 00:33:37.170 00:33:37.170 job0: (groupid=0, jobs=1): err= 0: pid=1287552: Wed Oct 30 14:20:35 2024 00:33:37.170 read: IOPS=17, BW=69.7KiB/s (71.4kB/s)(72.0KiB/1033msec) 00:33:37.170 slat (nsec): min=25651, max=26211, avg=25830.61, stdev=166.26 00:33:37.170 clat (usec): min=1013, max=42014, avg=39354.40, stdev=9580.68 00:33:37.170 lat (usec): min=1038, max=42040, avg=39380.23, stdev=9580.70 00:33:37.170 clat percentiles (usec): 00:33:37.170 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41157], 20.00th=[41157], 00:33:37.170 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:33:37.170 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:37.170 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:37.170 | 99.99th=[42206] 00:33:37.170 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:33:37.170 slat (usec): min=10, max=28718, avg=85.29, stdev=1267.94 00:33:37.170 clat (usec): min=243, max=732, avg=538.66, stdev=95.87 00:33:37.170 lat (usec): min=255, max=29349, avg=623.95, stdev=1275.94 00:33:37.170 clat percentiles (usec): 00:33:37.170 | 1.00th=[ 314], 5.00th=[ 363], 10.00th=[ 404], 20.00th=[ 449], 00:33:37.170 | 30.00th=[ 486], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 570], 00:33:37.170 | 70.00th=[ 603], 80.00th=[ 627], 90.00th=[ 652], 95.00th=[ 676], 00:33:37.170 | 99.00th=[ 725], 99.50th=[ 725], 99.90th=[ 734], 99.95th=[ 734], 00:33:37.170 | 99.99th=[ 734] 00:33:37.170 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:37.170 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:37.171 lat (usec) : 250=0.19%, 500=30.57%, 750=65.85% 00:33:37.171 lat (msec) : 2=0.19%, 50=3.21% 00:33:37.171 cpu : usr=0.78%, sys=1.36%, ctx=534, majf=0, minf=1 00:33:37.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:37.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.171 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:37.171 00:33:37.171 Run status group 0 (all jobs): 00:33:37.171 READ: bw=69.7KiB/s (71.4kB/s), 69.7KiB/s-69.7KiB/s (71.4kB/s-71.4kB/s), io=72.0KiB (73.7kB), run=1033-1033msec 00:33:37.171 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:33:37.171 00:33:37.171 Disk stats (read/write): 00:33:37.171 nvme0n1: ios=39/512, merge=0/0, ticks=1511/267, in_queue=1778, util=98.70% 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:37.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:37.171 14:20:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.171 rmmod nvme_tcp 00:33:37.171 rmmod nvme_fabrics 00:33:37.171 rmmod nvme_keyring 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1286478 ']' 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1286478 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1286478 ']' 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1286478 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1286478 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1286478' 00:33:37.171 killing process with pid 1286478 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1286478 00:33:37.171 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1286478 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.432 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.433 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.433 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:39.981 00:33:39.981 real 0m15.767s 00:33:39.981 user 0m37.247s 00:33:39.981 sys 0m7.395s 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:39.981 ************************************ 00:33:39.981 END TEST nvmf_nmic 00:33:39.981 ************************************ 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:39.981 ************************************ 00:33:39.981 START TEST nvmf_fio_target 00:33:39.981 ************************************ 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:39.981 * Looking for test storage... 
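The nvmftestfini teardown traced above reduces to a short shell sequence: disconnect the NVMe-oF controllers, wait for the SPDK namespaces to vanish from lsblk, unload the initiator transport modules, and stop the target application. A minimal standalone sketch, using the NQN, serial and PID values from this log and a hypothetical $nvmfpid variable (run as root):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drops both controllers for cnode1, as logged above
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 1                                          # wait until the SPDK namespaces disappear
  done
  modprobe -v -r nvme-tcp nvme-fabrics                 # nvme_keyring is removed with them, per the rmmod lines above
  kill "$nvmfpid"                                      # stop nvmf_tgt (pid 1286478 in this run)
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 1; done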
00:33:39.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:39.981 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:39.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.982 --rc genhtml_branch_coverage=1 00:33:39.982 --rc genhtml_function_coverage=1 00:33:39.982 --rc genhtml_legend=1 00:33:39.982 --rc geninfo_all_blocks=1 00:33:39.982 --rc geninfo_unexecuted_blocks=1 00:33:39.982 00:33:39.982 ' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:39.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.982 --rc genhtml_branch_coverage=1 00:33:39.982 --rc genhtml_function_coverage=1 00:33:39.982 --rc genhtml_legend=1 00:33:39.982 --rc geninfo_all_blocks=1 00:33:39.982 --rc geninfo_unexecuted_blocks=1 00:33:39.982 00:33:39.982 ' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:39.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.982 --rc genhtml_branch_coverage=1 00:33:39.982 --rc genhtml_function_coverage=1 00:33:39.982 --rc genhtml_legend=1 00:33:39.982 --rc geninfo_all_blocks=1 00:33:39.982 --rc geninfo_unexecuted_blocks=1 00:33:39.982 00:33:39.982 ' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:39.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.982 --rc genhtml_branch_coverage=1 00:33:39.982 --rc genhtml_function_coverage=1 00:33:39.982 --rc genhtml_legend=1 00:33:39.982 --rc geninfo_all_blocks=1 00:33:39.982 --rc geninfo_unexecuted_blocks=1 00:33:39.982 
00:33:39.982 ' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.982 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:39.983 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:39.983 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:39.983 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.983 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.983 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.983 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:39.983 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:39.983 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:39.983 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:48.128 14:20:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:48.128 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:48.129 14:20:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:48.129 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:48.129 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:48.129 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:48.129 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:48.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:48.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:33:48.129 00:33:48.129 --- 10.0.0.2 ping statistics --- 00:33:48.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.129 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:48.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:48.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:33:48.129 00:33:48.129 --- 10.0.0.1 ping statistics --- 00:33:48.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.129 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.129 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1292014 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1292014 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1292014 ']' 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
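The ip and iptables calls above build the physical-NIC test topology: one port of the 0000:4b:00.x E810 pair (cvl_0_0) is moved into a private network namespace for the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator side, so a single host can drive real NVMe/TCP traffic between its own ports. Condensed into one sketch, with the interface names, addresses and firewall rule exactly as they appear in this log:

  NS=cvl_0_0_ns_spdk                                        # target namespace used by this run
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # port-4420 exception, as the harness adds
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify reachability in both directions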
00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:48.130 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.130 [2024-10-30 14:20:45.479139] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:48.130 [2024-10-30 14:20:45.480298] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:33:48.130 [2024-10-30 14:20:45.480356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.130 [2024-10-30 14:20:45.580652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:48.130 [2024-10-30 14:20:45.632957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.130 [2024-10-30 14:20:45.633015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.130 [2024-10-30 14:20:45.633024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.130 [2024-10-30 14:20:45.633031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.130 [2024-10-30 14:20:45.633037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:48.130 [2024-10-30 14:20:45.635019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.130 [2024-10-30 14:20:45.635173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:48.130 [2024-10-30 14:20:45.635333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.130 [2024-10-30 14:20:45.635333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:48.130 [2024-10-30 14:20:45.712169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:48.130 [2024-10-30 14:20:45.713471] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:48.130 [2024-10-30 14:20:45.713590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:48.130 [2024-10-30 14:20:45.714131] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:48.130 [2024-10-30 14:20:45.714194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
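With the namespace in place, nvmf_tgt is launched inside it with interrupt mode enabled and a 4-core mask, and the harness waits for the RPC socket before provisioning; the notices above show the reactors and poll-group threads switching to interrupt (event-driven) operation instead of busy polling. A rough equivalent of that launch-and-wait step, using the paths and flags from this run (the polling loop on rpc_get_methods stands in for the script's waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!                                                # 1292014 in this log
  # Poll the default UNIX-domain RPC socket until the app answers rpc.py calls.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192   # first RPC issued by the fio test

Because /var/tmp/spdk.sock is a UNIX-domain socket, rpc.py can keep running from the root namespace even though the target's TCP listener lives inside cvl_0_0_ns_spdk, which is why the rpc.py calls below are not wrapped in ip netns exec.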
00:33:48.130 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:48.130 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:48.130 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:48.130 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:48.130 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.130 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.130 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:48.392 [2024-10-30 14:20:46.500214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.392 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.653 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:48.653 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.914 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:48.914 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.914 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:48.914 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.174 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:49.174 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:49.435 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.695 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:49.695 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.695 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:49.695 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.979 14:20:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:49.979 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:49.979 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:50.334 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:50.334 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.334 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:50.334 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:50.594 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.854 [2024-10-30 14:20:48.932082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.854 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:51.114 14:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:51.115 14:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:51.685 14:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:51.685 14:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:51.685 14:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:51.685 14:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:51.685 14:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:51.685 14:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:33:53.598 14:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:53.598 14:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:33:53.598 14:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:53.598 14:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:33:53.598 14:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:53.598 14:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:33:53.598 14:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:53.598 [global] 00:33:53.598 thread=1 00:33:53.598 invalidate=1 00:33:53.598 rw=write 00:33:53.598 time_based=1 00:33:53.598 runtime=1 00:33:53.598 ioengine=libaio 00:33:53.598 direct=1 00:33:53.598 bs=4096 00:33:53.598 iodepth=1 00:33:53.598 norandommap=0 00:33:53.598 numjobs=1 00:33:53.598 00:33:53.598 verify_dump=1 00:33:53.598 verify_backlog=512 00:33:53.598 verify_state_save=0 00:33:53.598 do_verify=1 00:33:53.598 verify=crc32c-intel 00:33:53.870 [job0] 00:33:53.870 filename=/dev/nvme0n1 00:33:53.870 [job1] 00:33:53.870 filename=/dev/nvme0n2 00:33:53.870 [job2] 00:33:53.870 filename=/dev/nvme0n3 00:33:53.870 [job3] 00:33:53.870 filename=/dev/nvme0n4 00:33:53.870 Could not set queue depth (nvme0n1) 00:33:53.870 Could not set queue depth (nvme0n2) 00:33:53.870 Could not set queue depth (nvme0n3) 00:33:53.870 Could not set queue depth (nvme0n4) 00:33:54.131 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.131 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.131 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.131 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.131 fio-3.35 00:33:54.131 Starting 4 threads 00:33:55.516 00:33:55.516 job0: (groupid=0, jobs=1): err= 0: pid=1293469: Wed Oct 30 14:20:53 2024 00:33:55.516 read: IOPS=17, BW=71.4KiB/s (73.1kB/s)(72.0KiB/1009msec) 00:33:55.516 slat (nsec): min=26009, max=26776, avg=26290.50, stdev=193.07 00:33:55.516 clat (usec): min=945, max=41959, avg=39000.82, stdev=9506.81 00:33:55.516 lat (usec): min=972, max=41985, avg=39027.11, stdev=9506.69 00:33:55.516 clat percentiles (usec): 00:33:55.516 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[41157], 20.00th=[41157], 00:33:55.516 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:55.516 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:33:55.516 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:55.516 | 99.99th=[42206] 00:33:55.516 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:33:55.516 slat (nsec): min=9191, max=54389, avg=30258.26, stdev=9659.80 00:33:55.516 clat (usec): min=149, max=984, avg=561.76, stdev=148.95 00:33:55.516 lat (usec): min=158, max=1017, avg=592.01, stdev=152.52 00:33:55.516 clat percentiles (usec): 00:33:55.516 | 1.00th=[ 243], 5.00th=[ 302], 10.00th=[ 343], 20.00th=[ 441], 00:33:55.516 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 603], 00:33:55.516 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 742], 95.00th=[ 791], 00:33:55.516 | 
99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 988], 99.95th=[ 988], 00:33:55.516 | 99.99th=[ 988] 00:33:55.516 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:33:55.516 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:55.516 lat (usec) : 250=1.32%, 500=29.62%, 750=57.17%, 1000=8.68% 00:33:55.516 lat (msec) : 50=3.21% 00:33:55.516 cpu : usr=0.89%, sys=2.08%, ctx=530, majf=0, minf=1 00:33:55.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.516 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:55.516 job1: (groupid=0, jobs=1): err= 0: pid=1293485: Wed Oct 30 14:20:53 2024 00:33:55.516 read: IOPS=17, BW=69.2KiB/s (70.8kB/s)(72.0KiB/1041msec) 00:33:55.516 slat (nsec): min=26951, max=27508, avg=27224.89, stdev=115.15 00:33:55.516 clat (usec): min=1089, max=42037, avg=39293.79, stdev=9545.59 00:33:55.516 lat (usec): min=1116, max=42064, avg=39321.01, stdev=9545.56 00:33:55.516 clat percentiles (usec): 00:33:55.516 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41157], 20.00th=[41157], 00:33:55.516 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:33:55.516 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:55.516 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:55.516 | 99.99th=[42206] 00:33:55.516 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:33:55.516 slat (usec): min=6, max=838, avg=34.51, stdev=51.23 00:33:55.516 clat (usec): min=183, max=1010, avg=605.41, stdev=142.96 00:33:55.516 lat (usec): min=193, max=1449, avg=639.93, stdev=152.38 00:33:55.516 clat percentiles (usec): 00:33:55.516 | 1.00th=[ 281], 5.00th=[ 375], 10.00th=[ 420], 20.00th=[ 486], 00:33:55.516 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 652], 00:33:55.516 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 832], 00:33:55.516 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1012], 99.95th=[ 1012], 00:33:55.516 | 99.99th=[ 1012] 00:33:55.516 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:33:55.516 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:55.516 lat (usec) : 250=0.57%, 500=22.26%, 750=60.38%, 1000=13.21% 00:33:55.516 lat (msec) : 2=0.38%, 50=3.21% 00:33:55.516 cpu : usr=1.54%, sys=1.35%, ctx=534, majf=0, minf=1 00:33:55.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.516 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:55.516 job2: (groupid=0, jobs=1): err= 0: pid=1293502: Wed Oct 30 14:20:53 2024 00:33:55.516 read: IOPS=19, BW=79.3KiB/s (81.2kB/s)(80.0KiB/1009msec) 00:33:55.516 slat (nsec): min=10418, max=28839, avg=25823.90, stdev=3666.03 00:33:55.516 clat (usec): min=1154, max=41998, avg=39870.45, stdev=9115.19 00:33:55.516 lat (usec): min=1183, max=42024, avg=39896.28, stdev=9114.57 00:33:55.516 clat percentiles (usec): 00:33:55.516 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41157], 
20.00th=[41681], 00:33:55.516 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:55.516 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:55.516 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:55.516 | 99.99th=[42206] 00:33:55.516 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:33:55.516 slat (nsec): min=2909, max=48393, avg=11925.25, stdev=7673.52 00:33:55.516 clat (usec): min=229, max=2508, avg=392.77, stdev=178.27 00:33:55.516 lat (usec): min=262, max=2520, avg=404.70, stdev=180.74 00:33:55.516 clat percentiles (usec): 00:33:55.516 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 302], 00:33:55.516 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 367], 00:33:55.516 | 70.00th=[ 379], 80.00th=[ 412], 90.00th=[ 553], 95.00th=[ 644], 00:33:55.516 | 99.00th=[ 881], 99.50th=[ 1483], 99.90th=[ 2507], 99.95th=[ 2507], 00:33:55.516 | 99.99th=[ 2507] 00:33:55.516 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:33:55.516 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:55.516 lat (usec) : 250=0.19%, 500=83.46%, 750=9.40%, 1000=2.44% 00:33:55.516 lat (msec) : 2=0.56%, 4=0.38%, 50=3.57% 00:33:55.516 cpu : usr=0.40%, sys=0.50%, ctx=534, majf=0, minf=1 00:33:55.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.516 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:55.516 job3: (groupid=0, jobs=1): err= 0: pid=1293508: Wed Oct 30 14:20:53 2024 00:33:55.516 read: IOPS=17, BW=69.4KiB/s (71.1kB/s)(72.0KiB/1037msec) 00:33:55.516 slat (nsec): min=27835, max=28431, avg=28109.50, stdev=157.28 00:33:55.516 clat (usec): min=40751, max=41142, avg=40958.79, stdev=132.26 00:33:55.516 lat (usec): min=40779, max=41170, avg=40986.90, stdev=132.21 00:33:55.516 clat percentiles (usec): 00:33:55.516 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:33:55.516 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:55.516 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:55.516 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:55.516 | 99.99th=[41157] 00:33:55.516 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:33:55.516 slat (nsec): min=9337, max=60496, avg=34049.19, stdev=8199.53 00:33:55.516 clat (usec): min=235, max=850, avg=537.22, stdev=97.53 00:33:55.516 lat (usec): min=245, max=862, avg=571.27, stdev=97.65 00:33:55.516 clat percentiles (usec): 00:33:55.516 | 1.00th=[ 306], 5.00th=[ 351], 10.00th=[ 392], 20.00th=[ 449], 00:33:55.516 | 30.00th=[ 498], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 578], 00:33:55.516 | 70.00th=[ 594], 80.00th=[ 619], 90.00th=[ 644], 95.00th=[ 676], 00:33:55.516 | 99.00th=[ 725], 99.50th=[ 734], 99.90th=[ 848], 99.95th=[ 848], 00:33:55.516 | 99.99th=[ 848] 00:33:55.516 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:33:55.516 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:55.516 lat (usec) : 250=0.19%, 500=29.06%, 750=66.98%, 1000=0.38% 00:33:55.516 lat (msec) : 50=3.40% 00:33:55.516 cpu : usr=1.06%, sys=2.12%, ctx=531, majf=0, minf=1 
00:33:55.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.517 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:55.517 00:33:55.517 Run status group 0 (all jobs): 00:33:55.517 READ: bw=284KiB/s (291kB/s), 69.2KiB/s-79.3KiB/s (70.8kB/s-81.2kB/s), io=296KiB (303kB), run=1009-1041msec 00:33:55.517 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2030KiB/s (2015kB/s-2078kB/s), io=8192KiB (8389kB), run=1009-1041msec 00:33:55.517 00:33:55.517 Disk stats (read/write): 00:33:55.517 nvme0n1: ios=63/512, merge=0/0, ticks=547/230, in_queue=777, util=86.67% 00:33:55.517 nvme0n2: ios=59/512, merge=0/0, ticks=607/255, in_queue=862, util=90.70% 00:33:55.517 nvme0n3: ios=72/512, merge=0/0, ticks=840/197, in_queue=1037, util=91.86% 00:33:55.517 nvme0n4: ios=76/512, merge=0/0, ticks=695/230, in_queue=925, util=97.22% 00:33:55.517 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:55.517 [global] 00:33:55.517 thread=1 00:33:55.517 invalidate=1 00:33:55.517 rw=randwrite 00:33:55.517 time_based=1 00:33:55.517 runtime=1 00:33:55.517 ioengine=libaio 00:33:55.517 direct=1 00:33:55.517 bs=4096 00:33:55.517 iodepth=1 00:33:55.517 norandommap=0 00:33:55.517 numjobs=1 00:33:55.517 00:33:55.517 verify_dump=1 00:33:55.517 verify_backlog=512 00:33:55.517 verify_state_save=0 00:33:55.517 do_verify=1 00:33:55.517 verify=crc32c-intel 00:33:55.517 [job0] 00:33:55.517 filename=/dev/nvme0n1 00:33:55.517 [job1] 00:33:55.517 filename=/dev/nvme0n2 00:33:55.517 [job2] 00:33:55.517 filename=/dev/nvme0n3 00:33:55.517 [job3] 00:33:55.517 filename=/dev/nvme0n4 00:33:55.517 Could not set queue depth (nvme0n1) 00:33:55.517 Could not set queue depth (nvme0n2) 00:33:55.517 Could not set queue depth (nvme0n3) 00:33:55.517 Could not set queue depth (nvme0n4) 00:33:55.778 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.778 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.778 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.778 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.778 fio-3.35 00:33:55.778 Starting 4 threads 00:33:57.167 00:33:57.167 job0: (groupid=0, jobs=1): err= 0: pid=1293978: Wed Oct 30 14:20:55 2024 00:33:57.167 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:57.167 slat (nsec): min=26874, max=55527, avg=27920.52, stdev=2714.91 00:33:57.167 clat (usec): min=607, max=1195, avg=1019.55, stdev=74.51 00:33:57.167 lat (usec): min=635, max=1222, avg=1047.47, stdev=74.17 00:33:57.167 clat percentiles (usec): 00:33:57.167 | 1.00th=[ 824], 5.00th=[ 889], 10.00th=[ 922], 20.00th=[ 963], 00:33:57.167 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1045], 00:33:57.167 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1123], 00:33:57.167 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1188], 99.95th=[ 1188], 00:33:57.167 | 99.99th=[ 1188] 00:33:57.167 write: IOPS=722, BW=2889KiB/s 
(2958kB/s)(2892KiB/1001msec); 0 zone resets 00:33:57.167 slat (nsec): min=9274, max=74877, avg=31693.21, stdev=9156.03 00:33:57.167 clat (usec): min=213, max=1059, avg=595.83, stdev=118.09 00:33:57.167 lat (usec): min=224, max=1094, avg=627.53, stdev=121.23 00:33:57.167 clat percentiles (usec): 00:33:57.167 | 1.00th=[ 306], 5.00th=[ 388], 10.00th=[ 437], 20.00th=[ 502], 00:33:57.167 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:33:57.167 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 775], 00:33:57.167 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 1057], 99.95th=[ 1057], 00:33:57.167 | 99.99th=[ 1057] 00:33:57.167 bw ( KiB/s): min= 4096, max= 4096, per=31.41%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.167 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.167 lat (usec) : 250=0.32%, 500=11.26%, 750=42.43%, 1000=19.35% 00:33:57.167 lat (msec) : 2=26.64% 00:33:57.167 cpu : usr=3.40%, sys=4.10%, ctx=1238, majf=0, minf=1 00:33:57.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.167 issued rwts: total=512,723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.167 job1: (groupid=0, jobs=1): err= 0: pid=1293982: Wed Oct 30 14:20:55 2024 00:33:57.167 read: IOPS=590, BW=2362KiB/s (2418kB/s)(2364KiB/1001msec) 00:33:57.167 slat (nsec): min=7078, max=46152, avg=23429.63, stdev=7663.30 00:33:57.167 clat (usec): min=354, max=41161, avg=833.90, stdev=1663.52 00:33:57.167 lat (usec): min=381, max=41188, avg=857.33, stdev=1663.73 00:33:57.167 clat percentiles (usec): 00:33:57.167 | 1.00th=[ 529], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 701], 00:33:57.167 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 799], 00:33:57.167 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 873], 00:33:57.167 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[41157], 99.95th=[41157], 00:33:57.167 | 99.99th=[41157] 00:33:57.167 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:57.167 slat (nsec): min=9781, max=59945, avg=28209.40, stdev=10040.77 00:33:57.167 clat (usec): min=103, max=984, avg=441.76, stdev=97.98 00:33:57.167 lat (usec): min=114, max=1020, avg=469.97, stdev=101.13 00:33:57.167 clat percentiles (usec): 00:33:57.167 | 1.00th=[ 243], 5.00th=[ 297], 10.00th=[ 326], 20.00th=[ 355], 00:33:57.167 | 30.00th=[ 392], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 465], 00:33:57.167 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 537], 95.00th=[ 619], 00:33:57.167 | 99.00th=[ 750], 99.50th=[ 873], 99.90th=[ 914], 99.95th=[ 988], 00:33:57.167 | 99.99th=[ 988] 00:33:57.167 bw ( KiB/s): min= 4096, max= 4096, per=31.41%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.167 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.167 lat (usec) : 250=0.80%, 500=51.64%, 750=22.97%, 1000=24.52% 00:33:57.167 lat (msec) : 50=0.06% 00:33:57.167 cpu : usr=2.50%, sys=4.20%, ctx=1618, majf=0, minf=1 00:33:57.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.167 issued rwts: total=591,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.167 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:33:57.167 job2: (groupid=0, jobs=1): err= 0: pid=1294002: Wed Oct 30 14:20:55 2024 00:33:57.167 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1007msec) 00:33:57.167 slat (nsec): min=25260, max=25971, avg=25515.00, stdev=220.40 00:33:57.167 clat (usec): min=1029, max=42023, avg=39545.06, stdev=9925.35 00:33:57.167 lat (usec): min=1055, max=42049, avg=39570.57, stdev=9925.23 00:33:57.167 clat percentiles (usec): 00:33:57.167 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[41681], 20.00th=[41681], 00:33:57.167 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:57.167 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:57.167 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:57.167 | 99.99th=[42206] 00:33:57.167 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:33:57.167 slat (nsec): min=9211, max=62869, avg=28298.29, stdev=8665.19 00:33:57.167 clat (usec): min=240, max=4176, avg=616.23, stdev=196.91 00:33:57.167 lat (usec): min=271, max=4208, avg=644.53, stdev=198.93 00:33:57.167 clat percentiles (usec): 00:33:57.167 | 1.00th=[ 293], 5.00th=[ 396], 10.00th=[ 441], 20.00th=[ 506], 00:33:57.167 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 652], 00:33:57.167 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 783], 00:33:57.167 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 4178], 99.95th=[ 4178], 00:33:57.167 | 99.99th=[ 4178] 00:33:57.167 bw ( KiB/s): min= 4096, max= 4096, per=31.41%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.167 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.167 lat (usec) : 250=0.19%, 500=17.39%, 750=70.32%, 1000=8.70% 00:33:57.167 lat (msec) : 2=0.19%, 10=0.19%, 50=3.02% 00:33:57.167 cpu : usr=0.50%, sys=1.69%, ctx=529, majf=0, minf=2 00:33:57.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.167 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.167 job3: (groupid=0, jobs=1): err= 0: pid=1294008: Wed Oct 30 14:20:55 2024 00:33:57.167 read: IOPS=653, BW=2613KiB/s (2676kB/s)(2616KiB/1001msec) 00:33:57.167 slat (nsec): min=7203, max=58526, avg=25506.67, stdev=5302.97 00:33:57.167 clat (usec): min=395, max=1074, avg=821.81, stdev=130.72 00:33:57.167 lat (usec): min=421, max=1100, avg=847.32, stdev=131.46 00:33:57.167 clat percentiles (usec): 00:33:57.167 | 1.00th=[ 457], 5.00th=[ 594], 10.00th=[ 644], 20.00th=[ 693], 00:33:57.167 | 30.00th=[ 758], 40.00th=[ 799], 50.00th=[ 840], 60.00th=[ 881], 00:33:57.167 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 979], 95.00th=[ 1004], 00:33:57.167 | 99.00th=[ 1057], 99.50th=[ 1057], 99.90th=[ 1074], 99.95th=[ 1074], 00:33:57.167 | 99.99th=[ 1074] 00:33:57.167 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:57.167 slat (nsec): min=9718, max=64738, avg=29955.74, stdev=8864.10 00:33:57.167 clat (usec): min=129, max=737, avg=393.14, stdev=109.31 00:33:57.167 lat (usec): min=162, max=771, avg=423.10, stdev=110.47 00:33:57.167 clat percentiles (usec): 00:33:57.167 | 1.00th=[ 202], 5.00th=[ 229], 10.00th=[ 273], 20.00th=[ 310], 00:33:57.167 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 367], 60.00th=[ 412], 00:33:57.167 | 70.00th=[ 445], 
80.00th=[ 486], 90.00th=[ 562], 95.00th=[ 594], 00:33:57.167 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 725], 99.95th=[ 742], 00:33:57.167 | 99.99th=[ 742] 00:33:57.167 bw ( KiB/s): min= 4096, max= 4096, per=31.41%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.167 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.167 lat (usec) : 250=4.59%, 500=47.08%, 750=20.32%, 1000=25.69% 00:33:57.167 lat (msec) : 2=2.32% 00:33:57.167 cpu : usr=2.20%, sys=5.20%, ctx=1679, majf=0, minf=1 00:33:57.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.168 issued rwts: total=654,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.168 00:33:57.168 Run status group 0 (all jobs): 00:33:57.168 READ: bw=7047KiB/s (7216kB/s), 67.5KiB/s-2613KiB/s (69.1kB/s-2676kB/s), io=7096KiB (7266kB), run=1001-1007msec 00:33:57.168 WRITE: bw=12.7MiB/s (13.4MB/s), 2034KiB/s-4092KiB/s (2083kB/s-4190kB/s), io=12.8MiB (13.4MB), run=1001-1007msec 00:33:57.168 00:33:57.168 Disk stats (read/write): 00:33:57.168 nvme0n1: ios=523/512, merge=0/0, ticks=627/234, in_queue=861, util=94.39% 00:33:57.168 nvme0n2: ios=547/801, merge=0/0, ticks=1425/358, in_queue=1783, util=97.35% 00:33:57.168 nvme0n3: ios=61/512, merge=0/0, ticks=766/303, in_queue=1069, util=95.14% 00:33:57.168 nvme0n4: ios=553/911, merge=0/0, ticks=1030/342, in_queue=1372, util=96.90% 00:33:57.168 14:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:57.168 [global] 00:33:57.168 thread=1 00:33:57.168 invalidate=1 00:33:57.168 rw=write 00:33:57.168 time_based=1 00:33:57.168 runtime=1 00:33:57.168 ioengine=libaio 00:33:57.168 direct=1 00:33:57.168 bs=4096 00:33:57.168 iodepth=128 00:33:57.168 norandommap=0 00:33:57.168 numjobs=1 00:33:57.168 00:33:57.168 verify_dump=1 00:33:57.168 verify_backlog=512 00:33:57.168 verify_state_save=0 00:33:57.168 do_verify=1 00:33:57.168 verify=crc32c-intel 00:33:57.168 [job0] 00:33:57.168 filename=/dev/nvme0n1 00:33:57.168 [job1] 00:33:57.168 filename=/dev/nvme0n2 00:33:57.168 [job2] 00:33:57.168 filename=/dev/nvme0n3 00:33:57.168 [job3] 00:33:57.168 filename=/dev/nvme0n4 00:33:57.168 Could not set queue depth (nvme0n1) 00:33:57.168 Could not set queue depth (nvme0n2) 00:33:57.168 Could not set queue depth (nvme0n3) 00:33:57.168 Could not set queue depth (nvme0n4) 00:33:57.429 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.429 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.429 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.429 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.429 fio-3.35 00:33:57.429 Starting 4 threads 00:33:58.814 00:33:58.814 job0: (groupid=0, jobs=1): err= 0: pid=1294422: Wed Oct 30 14:20:56 2024 00:33:58.814 read: IOPS=6334, BW=24.7MiB/s (25.9MB/s)(24.9MiB/1008msec) 00:33:58.814 slat (nsec): min=932, max=24858k, avg=82351.34, stdev=663542.78 00:33:58.814 clat (usec): min=1232, max=57374, avg=11049.58, stdev=9576.81 
00:33:58.814 lat (usec): min=1764, max=57380, avg=11131.93, stdev=9634.13 00:33:58.814 clat percentiles (usec): 00:33:58.814 | 1.00th=[ 3163], 5.00th=[ 5014], 10.00th=[ 5669], 20.00th=[ 6783], 00:33:58.814 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8586], 00:33:58.814 | 70.00th=[10028], 80.00th=[11207], 90.00th=[16909], 95.00th=[32900], 00:33:58.814 | 99.00th=[56361], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:33:58.814 | 99.99th=[57410] 00:33:58.814 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:33:58.814 slat (nsec): min=1600, max=14124k, avg=64717.21, stdev=456570.32 00:33:58.814 clat (usec): min=1180, max=25761, avg=8364.68, stdev=3031.97 00:33:58.814 lat (usec): min=1189, max=28287, avg=8429.39, stdev=3053.56 00:33:58.814 clat percentiles (usec): 00:33:58.814 | 1.00th=[ 3916], 5.00th=[ 4752], 10.00th=[ 5342], 20.00th=[ 6456], 00:33:58.814 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7767], 00:33:58.814 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[13042], 95.00th=[13698], 00:33:58.814 | 99.00th=[19530], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:33:58.814 | 99.99th=[25822] 00:33:58.814 bw ( KiB/s): min=20480, max=32768, per=26.75%, avg=26624.00, stdev=8688.93, samples=2 00:33:58.814 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:33:58.814 lat (msec) : 2=0.25%, 4=1.40%, 10=71.83%, 20=21.81%, 50=3.68% 00:33:58.814 lat (msec) : 100=1.03% 00:33:58.814 cpu : usr=4.37%, sys=7.25%, ctx=424, majf=0, minf=1 00:33:58.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:58.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.814 issued rwts: total=6385,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.814 job1: (groupid=0, jobs=1): err= 0: pid=1294442: Wed Oct 30 14:20:56 2024 00:33:58.814 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:33:58.814 slat (nsec): min=875, max=55695k, avg=107894.93, stdev=1371012.72 00:33:58.814 clat (msec): min=4, max=107, avg=14.00, stdev=18.27 00:33:58.814 lat (msec): min=4, max=107, avg=14.11, stdev=18.37 00:33:58.814 clat percentiles (msec): 00:33:58.814 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:33:58.814 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:33:58.814 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 29], 95.00th=[ 58], 00:33:58.814 | 99.00th=[ 95], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:33:58.814 | 99.99th=[ 108] 00:33:58.814 write: IOPS=5763, BW=22.5MiB/s (23.6MB/s)(22.5MiB/1001msec); 0 zone resets 00:33:58.814 slat (nsec): min=1519, max=4165.8k, avg=63232.07, stdev=336198.66 00:33:58.814 clat (usec): min=913, max=57484, avg=8310.99, stdev=2536.26 00:33:58.814 lat (usec): min=917, max=57493, avg=8374.22, stdev=2550.98 00:33:58.814 clat percentiles (usec): 00:33:58.814 | 1.00th=[ 4359], 5.00th=[ 5080], 10.00th=[ 6259], 20.00th=[ 7111], 00:33:58.814 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8094], 00:33:58.814 | 70.00th=[ 8291], 80.00th=[ 8848], 90.00th=[10683], 95.00th=[15270], 00:33:58.814 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:33:58.814 | 99.99th=[57410] 00:33:58.814 bw ( KiB/s): min=16384, max=28752, per=22.68%, avg=22568.00, stdev=8745.50, samples=2 00:33:58.814 iops : min= 4096, max= 7188, avg=5642.00, stdev=2186.37, 
samples=2 00:33:58.814 lat (usec) : 1000=0.09% 00:33:58.814 lat (msec) : 4=0.03%, 10=84.90%, 20=8.59%, 50=2.96%, 100=3.17% 00:33:58.814 lat (msec) : 250=0.28% 00:33:58.814 cpu : usr=4.00%, sys=4.20%, ctx=566, majf=0, minf=1 00:33:58.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:58.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.814 issued rwts: total=5632,5769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.814 job2: (groupid=0, jobs=1): err= 0: pid=1294460: Wed Oct 30 14:20:56 2024 00:33:58.814 read: IOPS=6408, BW=25.0MiB/s (26.2MB/s)(25.2MiB/1008msec) 00:33:58.814 slat (nsec): min=940, max=9222.6k, avg=75130.64, stdev=566658.06 00:33:58.814 clat (usec): min=1795, max=22963, avg=9889.82, stdev=2662.54 00:33:58.814 lat (usec): min=1837, max=22971, avg=9964.95, stdev=2701.34 00:33:58.814 clat percentiles (usec): 00:33:58.814 | 1.00th=[ 3916], 5.00th=[ 5866], 10.00th=[ 7111], 20.00th=[ 7963], 00:33:58.814 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:33:58.814 | 70.00th=[10814], 80.00th=[11994], 90.00th=[13173], 95.00th=[14484], 00:33:58.814 | 99.00th=[17695], 99.50th=[19530], 99.90th=[22414], 99.95th=[22938], 00:33:58.814 | 99.99th=[22938] 00:33:58.814 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:33:58.814 slat (nsec): min=1663, max=8008.6k, avg=65760.96, stdev=438308.26 00:33:58.814 clat (usec): min=1322, max=27104, avg=9574.43, stdev=3897.44 00:33:58.814 lat (usec): min=1332, max=27106, avg=9640.19, stdev=3922.08 00:33:58.814 clat percentiles (usec): 00:33:58.814 | 1.00th=[ 1876], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6587], 00:33:58.814 | 30.00th=[ 7373], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 9503], 00:33:58.814 | 70.00th=[10552], 80.00th=[12387], 90.00th=[15795], 95.00th=[17171], 00:33:58.814 | 99.00th=[19268], 99.50th=[22938], 99.90th=[26084], 99.95th=[27132], 00:33:58.814 | 99.99th=[27132] 00:33:58.814 bw ( KiB/s): min=24576, max=28672, per=26.75%, avg=26624.00, stdev=2896.31, samples=2 00:33:58.814 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:33:58.814 lat (msec) : 2=0.67%, 4=1.75%, 10=62.48%, 20=34.39%, 50=0.72% 00:33:58.814 cpu : usr=4.97%, sys=7.45%, ctx=454, majf=0, minf=1 00:33:58.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:58.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.814 issued rwts: total=6460,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.814 job3: (groupid=0, jobs=1): err= 0: pid=1294467: Wed Oct 30 14:20:56 2024 00:33:58.814 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:33:58.814 slat (nsec): min=915, max=18042k, avg=86160.43, stdev=614279.21 00:33:58.814 clat (usec): min=5420, max=64202, avg=10428.40, stdev=6312.16 00:33:58.814 lat (usec): min=5424, max=79159, avg=10514.56, stdev=6388.45 00:33:58.814 clat percentiles (usec): 00:33:58.814 | 1.00th=[ 5932], 5.00th=[ 6587], 10.00th=[ 6915], 20.00th=[ 7701], 00:33:58.814 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:33:58.814 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11863], 95.00th=[26608], 00:33:58.814 | 99.00th=[39060], 
99.50th=[39060], 99.90th=[63177], 99.95th=[64226], 00:33:58.814 | 99.99th=[64226] 00:33:58.814 write: IOPS=5975, BW=23.3MiB/s (24.5MB/s)(23.4MiB/1004msec); 0 zone resets 00:33:58.814 slat (nsec): min=1557, max=10081k, avg=81395.20, stdev=446373.49 00:33:58.814 clat (usec): min=1200, max=83390, avg=11441.49, stdev=10601.99 00:33:58.814 lat (usec): min=1212, max=83399, avg=11522.89, stdev=10656.67 00:33:58.814 clat percentiles (usec): 00:33:58.814 | 1.00th=[ 4424], 5.00th=[ 6128], 10.00th=[ 7242], 20.00th=[ 7898], 00:33:58.814 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8979], 00:33:58.814 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[16319], 95.00th=[26608], 00:33:58.814 | 99.00th=[73925], 99.50th=[79168], 99.90th=[83362], 99.95th=[83362], 00:33:58.814 | 99.99th=[83362] 00:33:58.814 bw ( KiB/s): min=20768, max=26208, per=23.60%, avg=23488.00, stdev=3846.66, samples=2 00:33:58.814 iops : min= 5192, max= 6552, avg=5872.00, stdev=961.67, samples=2 00:33:58.814 lat (msec) : 2=0.07%, 4=0.18%, 10=74.67%, 20=18.90%, 50=4.57% 00:33:58.814 lat (msec) : 100=1.61% 00:33:58.814 cpu : usr=3.99%, sys=4.99%, ctx=707, majf=0, minf=2 00:33:58.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:58.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.814 issued rwts: total=5632,5999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.814 00:33:58.814 Run status group 0 (all jobs): 00:33:58.814 READ: bw=93.4MiB/s (98.0MB/s), 21.9MiB/s-25.0MiB/s (23.0MB/s-26.2MB/s), io=94.2MiB (98.8MB), run=1001-1008msec 00:33:58.814 WRITE: bw=97.2MiB/s (102MB/s), 22.5MiB/s-25.8MiB/s (23.6MB/s-27.0MB/s), io=98.0MiB (103MB), run=1001-1008msec 00:33:58.814 00:33:58.814 Disk stats (read/write): 00:33:58.814 nvme0n1: ios=6021/6144, merge=0/0, ticks=39251/35699, in_queue=74950, util=96.49% 00:33:58.814 nvme0n2: ios=4131/4410, merge=0/0, ticks=27231/14991, in_queue=42222, util=91.34% 00:33:58.815 nvme0n3: ios=5608/5632, merge=0/0, ticks=51425/48145, in_queue=99570, util=97.37% 00:33:58.815 nvme0n4: ios=4407/4608, merge=0/0, ticks=19832/20824, in_queue=40656, util=89.43% 00:33:58.815 14:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:58.815 [global] 00:33:58.815 thread=1 00:33:58.815 invalidate=1 00:33:58.815 rw=randwrite 00:33:58.815 time_based=1 00:33:58.815 runtime=1 00:33:58.815 ioengine=libaio 00:33:58.815 direct=1 00:33:58.815 bs=4096 00:33:58.815 iodepth=128 00:33:58.815 norandommap=0 00:33:58.815 numjobs=1 00:33:58.815 00:33:58.815 verify_dump=1 00:33:58.815 verify_backlog=512 00:33:58.815 verify_state_save=0 00:33:58.815 do_verify=1 00:33:58.815 verify=crc32c-intel 00:33:58.815 [job0] 00:33:58.815 filename=/dev/nvme0n1 00:33:58.815 [job1] 00:33:58.815 filename=/dev/nvme0n2 00:33:58.815 [job2] 00:33:58.815 filename=/dev/nvme0n3 00:33:58.815 [job3] 00:33:58.815 filename=/dev/nvme0n4 00:33:58.815 Could not set queue depth (nvme0n1) 00:33:58.815 Could not set queue depth (nvme0n2) 00:33:58.815 Could not set queue depth (nvme0n3) 00:33:58.815 Could not set queue depth (nvme0n4) 00:33:59.075 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.075 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.075 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.075 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.075 fio-3.35 00:33:59.075 Starting 4 threads 00:34:00.464 00:34:00.464 job0: (groupid=0, jobs=1): err= 0: pid=1294874: Wed Oct 30 14:20:58 2024 00:34:00.464 read: IOPS=3522, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec) 00:34:00.464 slat (nsec): min=954, max=43068k, avg=159552.94, stdev=1371509.24 00:34:00.464 clat (usec): min=4567, max=91238, avg=21624.04, stdev=17260.02 00:34:00.464 lat (usec): min=4573, max=91258, avg=21783.59, stdev=17338.48 00:34:00.464 clat percentiles (usec): 00:34:00.464 | 1.00th=[ 6194], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[ 8586], 00:34:00.464 | 30.00th=[ 9372], 40.00th=[10945], 50.00th=[15139], 60.00th=[20841], 00:34:00.464 | 70.00th=[26346], 80.00th=[32637], 90.00th=[42730], 95.00th=[50594], 00:34:00.464 | 99.00th=[90702], 99.50th=[90702], 99.90th=[90702], 99.95th=[91751], 00:34:00.464 | 99.99th=[91751] 00:34:00.464 write: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec); 0 zone resets 00:34:00.464 slat (nsec): min=1574, max=16020k, avg=116292.70, stdev=650862.10 00:34:00.464 clat (usec): min=4293, max=91012, avg=14251.57, stdev=13626.41 00:34:00.464 lat (usec): min=4302, max=91014, avg=14367.86, stdev=13696.11 00:34:00.464 clat percentiles (usec): 00:34:00.464 | 1.00th=[ 5800], 5.00th=[ 6718], 10.00th=[ 7111], 20.00th=[ 7898], 00:34:00.464 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9503], 00:34:00.464 | 70.00th=[11994], 80.00th=[13042], 90.00th=[29492], 95.00th=[53740], 00:34:00.464 | 99.00th=[68682], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:34:00.464 | 99.99th=[90702] 00:34:00.464 bw ( KiB/s): min=12288, max=16384, per=17.03%, avg=14336.00, stdev=2896.31, samples=2 00:34:00.464 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:34:00.464 lat (msec) : 10=50.30%, 20=22.59%, 50=21.67%, 100=5.44% 00:34:00.464 cpu : usr=1.77%, sys=3.35%, ctx=391, majf=0, minf=1 00:34:00.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:34:00.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.464 issued rwts: total=3579,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.464 job1: (groupid=0, jobs=1): err= 0: pid=1294878: Wed Oct 30 14:20:58 2024 00:34:00.464 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:34:00.464 slat (nsec): min=964, max=7707.8k, avg=88594.31, stdev=497200.52 00:34:00.464 clat (usec): min=6830, max=49126, avg=12726.12, stdev=5407.96 00:34:00.464 lat (usec): min=6833, max=49132, avg=12814.72, stdev=5444.45 00:34:00.464 clat percentiles (usec): 00:34:00.464 | 1.00th=[ 6849], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9503], 00:34:00.464 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11076], 00:34:00.464 | 70.00th=[12911], 80.00th=[15139], 90.00th=[21365], 95.00th=[23200], 00:34:00.464 | 99.00th=[31851], 99.50th=[41681], 99.90th=[49021], 99.95th=[49021], 00:34:00.464 | 99.99th=[49021] 00:34:00.464 write: IOPS=3430, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1008msec); 0 zone resets 00:34:00.464 slat (nsec): min=1799, max=15952k, avg=204917.51, 
stdev=906413.57 00:34:00.464 clat (usec): min=4971, max=72373, avg=25139.57, stdev=20119.48 00:34:00.464 lat (usec): min=4981, max=72383, avg=25344.49, stdev=20246.29 00:34:00.464 clat percentiles (usec): 00:34:00.464 | 1.00th=[ 5735], 5.00th=[ 7046], 10.00th=[ 8160], 20.00th=[ 8717], 00:34:00.464 | 30.00th=[ 9372], 40.00th=[11994], 50.00th=[14877], 60.00th=[21627], 00:34:00.464 | 70.00th=[32900], 80.00th=[46400], 90.00th=[60556], 95.00th=[64226], 00:34:00.464 | 99.00th=[70779], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:34:00.464 | 99.99th=[72877] 00:34:00.464 bw ( KiB/s): min= 9344, max=17296, per=15.83%, avg=13320.00, stdev=5622.91, samples=2 00:34:00.464 iops : min= 2336, max= 4324, avg=3330.00, stdev=1405.73, samples=2 00:34:00.464 lat (msec) : 10=37.40%, 20=34.70%, 50=18.36%, 100=9.54% 00:34:00.464 cpu : usr=2.18%, sys=3.97%, ctx=367, majf=0, minf=1 00:34:00.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:00.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.464 issued rwts: total=3072,3458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.464 job2: (groupid=0, jobs=1): err= 0: pid=1294884: Wed Oct 30 14:20:58 2024 00:34:00.464 read: IOPS=7220, BW=28.2MiB/s (29.6MB/s)(28.5MiB/1009msec) 00:34:00.464 slat (nsec): min=1014, max=7960.7k, avg=63850.25, stdev=439045.67 00:34:00.464 clat (usec): min=3766, max=17974, avg=8326.74, stdev=2222.37 00:34:00.464 lat (usec): min=3768, max=17977, avg=8390.59, stdev=2244.22 00:34:00.464 clat percentiles (usec): 00:34:00.464 | 1.00th=[ 4752], 5.00th=[ 5407], 10.00th=[ 5735], 20.00th=[ 6652], 00:34:00.464 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 8455], 00:34:00.464 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[11207], 95.00th=[12387], 00:34:00.464 | 99.00th=[15270], 99.50th=[16712], 99.90th=[17695], 99.95th=[17957], 00:34:00.464 | 99.99th=[17957] 00:34:00.464 write: IOPS=7611, BW=29.7MiB/s (31.2MB/s)(30.0MiB/1009msec); 0 zone resets 00:34:00.464 slat (nsec): min=1651, max=6148.6k, avg=65226.70, stdev=356245.74 00:34:00.464 clat (usec): min=1200, max=18234, avg=8711.22, stdev=3269.93 00:34:00.464 lat (usec): min=1211, max=18248, avg=8776.44, stdev=3284.87 00:34:00.464 clat percentiles (usec): 00:34:00.464 | 1.00th=[ 3982], 5.00th=[ 4752], 10.00th=[ 5080], 20.00th=[ 6128], 00:34:00.464 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8455], 00:34:00.464 | 70.00th=[ 9634], 80.00th=[11994], 90.00th=[13566], 95.00th=[15795], 00:34:00.464 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[18220], 00:34:00.464 | 99.99th=[18220] 00:34:00.464 bw ( KiB/s): min=29352, max=32000, per=36.45%, avg=30676.00, stdev=1872.42, samples=2 00:34:00.464 iops : min= 7338, max= 8000, avg=7669.00, stdev=468.10, samples=2 00:34:00.464 lat (msec) : 2=0.01%, 4=0.53%, 10=75.34%, 20=24.12% 00:34:00.464 cpu : usr=5.36%, sys=6.65%, ctx=672, majf=0, minf=1 00:34:00.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:00.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.464 issued rwts: total=7285,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.464 job3: (groupid=0, jobs=1): err= 0: 
pid=1294892: Wed Oct 30 14:20:58 2024 00:34:00.464 read: IOPS=6336, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1008msec) 00:34:00.464 slat (nsec): min=911, max=8208.1k, avg=75751.00, stdev=503621.71 00:34:00.464 clat (usec): min=2928, max=24835, avg=9180.95, stdev=3216.65 00:34:00.464 lat (usec): min=2935, max=24837, avg=9256.70, stdev=3250.47 00:34:00.464 clat percentiles (usec): 00:34:00.464 | 1.00th=[ 4555], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 7177], 00:34:00.464 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 8848], 00:34:00.465 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[12911], 95.00th=[16450], 00:34:00.465 | 99.00th=[22414], 99.50th=[23200], 99.90th=[23725], 99.95th=[24773], 00:34:00.465 | 99.99th=[24773] 00:34:00.465 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:34:00.465 slat (nsec): min=1590, max=6212.2k, avg=72758.35, stdev=383657.14 00:34:00.465 clat (usec): min=1197, max=24835, avg=10405.96, stdev=4784.60 00:34:00.465 lat (usec): min=1208, max=24837, avg=10478.72, stdev=4813.22 00:34:00.465 clat percentiles (usec): 00:34:00.465 | 1.00th=[ 3163], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 6063], 00:34:00.465 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 8586], 60.00th=[10290], 00:34:00.465 | 70.00th=[13829], 80.00th=[16057], 90.00th=[17433], 95.00th=[18482], 00:34:00.465 | 99.00th=[20317], 99.50th=[20841], 99.90th=[23462], 99.95th=[23725], 00:34:00.465 | 99.99th=[24773] 00:34:00.465 bw ( KiB/s): min=24576, max=28672, per=31.63%, avg=26624.00, stdev=2896.31, samples=2 00:34:00.465 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:34:00.465 lat (msec) : 2=0.18%, 4=0.96%, 10=66.20%, 20=30.50%, 50=2.16% 00:34:00.465 cpu : usr=4.07%, sys=7.25%, ctx=556, majf=0, minf=2 00:34:00.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:00.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.465 issued rwts: total=6387,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.465 00:34:00.465 Run status group 0 (all jobs): 00:34:00.465 READ: bw=78.1MiB/s (81.9MB/s), 11.9MiB/s-28.2MiB/s (12.5MB/s-29.6MB/s), io=79.4MiB (83.2MB), run=1008-1016msec 00:34:00.465 WRITE: bw=82.2MiB/s (86.2MB/s), 13.4MiB/s-29.7MiB/s (14.1MB/s-31.2MB/s), io=83.5MiB (87.6MB), run=1008-1016msec 00:34:00.465 00:34:00.465 Disk stats (read/write): 00:34:00.465 nvme0n1: ios=2593/2874, merge=0/0, ticks=15815/11682, in_queue=27497, util=97.19% 00:34:00.465 nvme0n2: ios=2921/3072, merge=0/0, ticks=11050/23688, in_queue=34738, util=99.18% 00:34:00.465 nvme0n3: ios=6182/6279, merge=0/0, ticks=48584/52325, in_queue=100909, util=99.47% 00:34:00.465 nvme0n4: ios=5168/5430, merge=0/0, ticks=45088/56600, in_queue=101688, util=100.00% 00:34:00.465 14:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:00.465 14:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1295191 00:34:00.465 14:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:00.465 14:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:00.465 [global] 00:34:00.465 thread=1 00:34:00.465 invalidate=1 00:34:00.465 rw=read 
00:34:00.465 time_based=1 00:34:00.465 runtime=10 00:34:00.465 ioengine=libaio 00:34:00.465 direct=1 00:34:00.465 bs=4096 00:34:00.465 iodepth=1 00:34:00.465 norandommap=1 00:34:00.465 numjobs=1 00:34:00.465 00:34:00.465 [job0] 00:34:00.465 filename=/dev/nvme0n1 00:34:00.465 [job1] 00:34:00.465 filename=/dev/nvme0n2 00:34:00.465 [job2] 00:34:00.465 filename=/dev/nvme0n3 00:34:00.465 [job3] 00:34:00.465 filename=/dev/nvme0n4 00:34:00.465 Could not set queue depth (nvme0n1) 00:34:00.465 Could not set queue depth (nvme0n2) 00:34:00.465 Could not set queue depth (nvme0n3) 00:34:00.465 Could not set queue depth (nvme0n4) 00:34:00.726 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.726 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.726 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.726 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.726 fio-3.35 00:34:00.726 Starting 4 threads 00:34:03.274 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:03.537 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10407936, buflen=4096 00:34:03.537 fio: pid=1295422, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:03.537 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:03.798 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11812864, buflen=4096 00:34:03.798 fio: pid=1295416, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:03.798 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.798 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:03.798 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=16052224, buflen=4096 00:34:03.798 fio: pid=1295400, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:03.798 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.798 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:04.059 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.059 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:04.059 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=311296, buflen=4096 00:34:04.059 fio: pid=1295406, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:04.059 00:34:04.059 job0: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1295400: Wed Oct 30 14:21:02 2024 00:34:04.059 read: IOPS=1314, BW=5257KiB/s (5383kB/s)(15.3MiB/2982msec) 00:34:04.059 slat (usec): min=6, max=15203, avg=30.51, stdev=311.68 00:34:04.059 clat (usec): min=187, max=2471, avg=719.25, stdev=140.60 00:34:04.059 lat (usec): min=195, max=15958, avg=749.76, stdev=343.84 00:34:04.059 clat percentiles (usec): 00:34:04.059 | 1.00th=[ 347], 5.00th=[ 510], 10.00th=[ 562], 20.00th=[ 619], 00:34:04.059 | 30.00th=[ 660], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 758], 00:34:04.059 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 857], 95.00th=[ 881], 00:34:04.059 | 99.00th=[ 963], 99.50th=[ 1106], 99.90th=[ 2089], 99.95th=[ 2180], 00:34:04.059 | 99.99th=[ 2474] 00:34:04.059 bw ( KiB/s): min= 5128, max= 5656, per=45.00%, avg=5319.40, stdev=200.43, samples=5 00:34:04.059 iops : min= 1282, max= 1414, avg=1329.80, stdev=50.11, samples=5 00:34:04.059 lat (usec) : 250=0.18%, 500=4.34%, 750=53.65%, 1000=41.12% 00:34:04.060 lat (msec) : 2=0.56%, 4=0.13% 00:34:04.060 cpu : usr=1.81%, sys=4.93%, ctx=3922, majf=0, minf=1 00:34:04.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.060 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.060 issued rwts: total=3920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.060 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1295406: Wed Oct 30 14:21:02 2024 00:34:04.060 read: IOPS=24, BW=95.4KiB/s (97.6kB/s)(304KiB/3188msec) 00:34:04.060 slat (usec): min=23, max=20606, avg=431.30, stdev=2624.11 00:34:04.060 clat (usec): min=1770, max=43192, avg=41214.22, stdev=4608.26 00:34:04.060 lat (usec): min=1805, max=61963, avg=41650.85, stdev=5321.35 00:34:04.060 clat percentiles (usec): 00:34:04.060 | 1.00th=[ 1778], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:04.060 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:04.060 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:04.060 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:34:04.060 | 99.99th=[43254] 00:34:04.060 bw ( KiB/s): min= 96, max= 96, per=0.81%, avg=96.00, stdev= 0.00, samples=6 00:34:04.060 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=6 00:34:04.060 lat (msec) : 2=1.30%, 50=97.40% 00:34:04.060 cpu : usr=0.16%, sys=0.00%, ctx=79, majf=0, minf=2 00:34:04.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.060 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.060 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.060 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1295416: Wed Oct 30 14:21:02 2024 00:34:04.060 read: IOPS=1032, BW=4127KiB/s (4226kB/s)(11.3MiB/2795msec) 00:34:04.060 slat (nsec): min=5914, max=62019, avg=26790.44, stdev=5070.94 00:34:04.060 clat (usec): min=397, max=1609, avg=928.34, stdev=187.07 00:34:04.060 lat (usec): min=424, max=1635, avg=955.13, stdev=187.20 00:34:04.060 clat percentiles (usec): 00:34:04.060 | 1.00th=[ 
562], 5.00th=[ 709], 10.00th=[ 766], 20.00th=[ 807], 00:34:04.060 | 30.00th=[ 832], 40.00th=[ 848], 50.00th=[ 873], 60.00th=[ 889], 00:34:04.060 | 70.00th=[ 930], 80.00th=[ 1090], 90.00th=[ 1254], 95.00th=[ 1319], 00:34:04.060 | 99.00th=[ 1401], 99.50th=[ 1434], 99.90th=[ 1565], 99.95th=[ 1582], 00:34:04.060 | 99.99th=[ 1614] 00:34:04.060 bw ( KiB/s): min= 3712, max= 4640, per=36.06%, avg=4262.40, stdev=484.69, samples=5 00:34:04.060 iops : min= 928, max= 1160, avg=1065.60, stdev=121.17, samples=5 00:34:04.060 lat (usec) : 500=0.59%, 750=7.11%, 1000=67.76% 00:34:04.060 lat (msec) : 2=24.51% 00:34:04.060 cpu : usr=1.04%, sys=4.87%, ctx=2885, majf=0, minf=2 00:34:04.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.060 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.060 issued rwts: total=2885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.060 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1295422: Wed Oct 30 14:21:02 2024 00:34:04.060 read: IOPS=973, BW=3894KiB/s (3988kB/s)(9.93MiB/2610msec) 00:34:04.060 slat (nsec): min=6601, max=60061, avg=26651.28, stdev=3456.69 00:34:04.060 clat (usec): min=345, max=1971, avg=985.51, stdev=139.90 00:34:04.060 lat (usec): min=372, max=1998, avg=1012.17, stdev=140.33 00:34:04.060 clat percentiles (usec): 00:34:04.060 | 1.00th=[ 611], 5.00th=[ 734], 10.00th=[ 799], 20.00th=[ 881], 00:34:04.060 | 30.00th=[ 922], 40.00th=[ 971], 50.00th=[ 1004], 60.00th=[ 1029], 00:34:04.060 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1156], 95.00th=[ 1188], 00:34:04.060 | 99.00th=[ 1270], 99.50th=[ 1336], 99.90th=[ 1434], 99.95th=[ 1598], 00:34:04.060 | 99.99th=[ 1975] 00:34:04.060 bw ( KiB/s): min= 3808, max= 4080, per=33.28%, avg=3934.40, stdev=98.70, samples=5 00:34:04.060 iops : min= 952, max= 1020, avg=983.60, stdev=24.67, samples=5 00:34:04.060 lat (usec) : 500=0.16%, 750=5.59%, 1000=44.14% 00:34:04.060 lat (msec) : 2=50.08% 00:34:04.060 cpu : usr=1.95%, sys=3.64%, ctx=2543, majf=0, minf=2 00:34:04.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.060 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.060 issued rwts: total=2542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.060 00:34:04.060 Run status group 0 (all jobs): 00:34:04.060 READ: bw=11.5MiB/s (12.1MB/s), 95.4KiB/s-5257KiB/s (97.6kB/s-5383kB/s), io=36.8MiB (38.6MB), run=2610-3188msec 00:34:04.060 00:34:04.060 Disk stats (read/write): 00:34:04.060 nvme0n1: ios=3785/0, merge=0/0, ticks=2405/0, in_queue=2405, util=95.39% 00:34:04.060 nvme0n2: ios=74/0, merge=0/0, ticks=3052/0, in_queue=3052, util=95.04% 00:34:04.060 nvme0n3: ios=2731/0, merge=0/0, ticks=2187/0, in_queue=2187, util=96.03% 00:34:04.060 nvme0n4: ios=2540/0, merge=0/0, ticks=2316/0, in_queue=2316, util=96.42% 00:34:04.321 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.321 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc3 00:34:04.583 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.583 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:04.583 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.583 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:04.844 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.844 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1295191 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:05.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:05.105 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:05.106 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:05.106 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:05.106 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:05.106 nvmf hotplug test: fio failed as expected 00:34:05.106 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:05.367 14:21:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.367 rmmod nvme_tcp 00:34:05.367 rmmod nvme_fabrics 00:34:05.367 rmmod nvme_keyring 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1292014 ']' 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1292014 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1292014 ']' 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1292014 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1292014 00:34:05.367 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:05.368 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:05.368 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1292014' 00:34:05.368 killing process with pid 1292014 00:34:05.368 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1292014 00:34:05.368 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1292014 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.628 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.542 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:07.542 00:34:07.542 real 0m28.096s 00:34:07.542 user 2m13.858s 00:34:07.542 sys 0m12.683s 00:34:07.542 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.542 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.542 ************************************ 00:34:07.542 END TEST nvmf_fio_target 00:34:07.542 ************************************ 00:34:07.803 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:07.803 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:07.803 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.803 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:07.803 ************************************ 00:34:07.803 START TEST nvmf_bdevio 00:34:07.803 ************************************ 00:34:07.803 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:07.803 * Looking for test storage... 
00:34:07.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.803 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:07.803 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:07.803 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:07.803 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:07.803 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.803 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.803 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.804 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.804 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.804 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.804 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:08.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.065 --rc genhtml_branch_coverage=1 00:34:08.065 --rc genhtml_function_coverage=1 00:34:08.065 --rc genhtml_legend=1 00:34:08.065 --rc geninfo_all_blocks=1 00:34:08.065 --rc geninfo_unexecuted_blocks=1 00:34:08.065 00:34:08.065 ' 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:08.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.065 --rc genhtml_branch_coverage=1 00:34:08.065 --rc genhtml_function_coverage=1 00:34:08.065 --rc genhtml_legend=1 00:34:08.065 --rc geninfo_all_blocks=1 00:34:08.065 --rc geninfo_unexecuted_blocks=1 00:34:08.065 00:34:08.065 ' 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:08.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.065 --rc genhtml_branch_coverage=1 00:34:08.065 --rc genhtml_function_coverage=1 00:34:08.065 --rc genhtml_legend=1 00:34:08.065 --rc geninfo_all_blocks=1 00:34:08.065 --rc geninfo_unexecuted_blocks=1 00:34:08.065 00:34:08.065 ' 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:08.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.065 --rc genhtml_branch_coverage=1 00:34:08.065 --rc genhtml_function_coverage=1 00:34:08.065 --rc genhtml_legend=1 00:34:08.065 --rc geninfo_all_blocks=1 00:34:08.065 --rc geninfo_unexecuted_blocks=1 00:34:08.065 00:34:08.065 ' 00:34:08.065 14:21:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.065 14:21:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:08.065 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.066 14:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:16.207 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:16.207 14:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:16.207 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.207 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:16.208 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:16.208 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:16.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:34:16.208 00:34:16.208 --- 10.0.0.2 ping statistics --- 00:34:16.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.208 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:16.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:34:16.208 00:34:16.208 --- 10.0.0.1 ping statistics --- 00:34:16.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.208 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.208 14:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1301045 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1301045 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1301045 ']' 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.208 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.208 [2024-10-30 14:21:13.804533] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:16.208 [2024-10-30 14:21:13.805652] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:34:16.208 [2024-10-30 14:21:13.805703] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.208 [2024-10-30 14:21:13.905628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:16.208 [2024-10-30 14:21:13.957876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.208 [2024-10-30 14:21:13.957929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.208 [2024-10-30 14:21:13.957938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.208 [2024-10-30 14:21:13.957951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.208 [2024-10-30 14:21:13.957957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.208 [2024-10-30 14:21:13.960009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:16.208 [2024-10-30 14:21:13.960231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:16.208 [2024-10-30 14:21:13.960390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:16.208 [2024-10-30 14:21:13.960392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:16.208 [2024-10-30 14:21:14.037397] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
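
The nvmftestinit/nvmf_tcp_init sequence traced above builds a two-port topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; nvmf_tgt is then launched inside the namespace in interrupt mode on cores 3-6 (-m 0x78). A condensed sketch of what the trace does, with the same names and addresses as this run (error handling and the waitforlisten polling loop omitted):

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"            # initiator address, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address, inside namespace
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
  ping -c 1 10.0.0.2                               # reachability check in both directions
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Target app: shm id 0, tracepoint mask 0xFFFF, interrupt mode, cores 3-6 (0x78):
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!                                       # waitforlisten then polls /var/tmp/spdk.sock
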
00:34:16.208 [2024-10-30 14:21:14.038324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:16.208 [2024-10-30 14:21:14.038597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:16.208 [2024-10-30 14:21:14.039252] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:16.208 [2024-10-30 14:21:14.039257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.470 [2024-10-30 14:21:14.685390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.470 Malloc0 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.470 14:21:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.470 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.732 [2024-10-30 14:21:14.773552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:16.732 { 00:34:16.732 "params": { 00:34:16.732 "name": "Nvme$subsystem", 00:34:16.732 "trtype": "$TEST_TRANSPORT", 00:34:16.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.732 "adrfam": "ipv4", 00:34:16.732 "trsvcid": "$NVMF_PORT", 00:34:16.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.732 "hdgst": ${hdgst:-false}, 00:34:16.732 "ddgst": ${ddgst:-false} 00:34:16.732 }, 00:34:16.732 "method": "bdev_nvme_attach_controller" 00:34:16.732 } 00:34:16.732 EOF 00:34:16.732 )") 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:16.732 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:16.732 "params": { 00:34:16.732 "name": "Nvme1", 00:34:16.732 "trtype": "tcp", 00:34:16.732 "traddr": "10.0.0.2", 00:34:16.732 "adrfam": "ipv4", 00:34:16.732 "trsvcid": "4420", 00:34:16.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:16.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:16.732 "hdgst": false, 00:34:16.732 "ddgst": false 00:34:16.732 }, 00:34:16.732 "method": "bdev_nvme_attach_controller" 00:34:16.732 }' 00:34:16.732 [2024-10-30 14:21:14.833135] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
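
bdevio.sh then provisions the target over its RPC socket (rpc_cmd forwards the arguments to scripts/rpc.py against /var/tmp/spdk.sock) and hands bdevio a generated JSON config whose bdev entry is the bdev_nvme_attach_controller call printed above, so bdevio itself acts as the NVMe-oF TCP host. Run by hand, the same provisioning would look roughly like the sketch below (flag values copied from the trace):

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC nvmf_create_transport -t tcp -o -u 8192                  # same transport flags as the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB ramdisk, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevio reads the gen_nvmf_target_json output (test/nvmf/common.sh) through a process
  # substitution, which is the /dev/fd/62 seen in the trace, and attaches to 10.0.0.2:4420
  # as host nqn.2016-06.io.spdk:host1, exposing the namespace as bdev Nvme1n1.
  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)
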
00:34:16.732 [2024-10-30 14:21:14.833207] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301325 ] 00:34:16.732 [2024-10-30 14:21:14.927934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:16.732 [2024-10-30 14:21:14.984354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.732 [2024-10-30 14:21:14.984515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.732 [2024-10-30 14:21:14.984515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.994 I/O targets: 00:34:16.994 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:16.994 00:34:16.994 00:34:16.994 CUnit - A unit testing framework for C - Version 2.1-3 00:34:16.994 http://cunit.sourceforge.net/ 00:34:16.994 00:34:16.994 00:34:16.994 Suite: bdevio tests on: Nvme1n1 00:34:16.994 Test: blockdev write read block ...passed 00:34:16.994 Test: blockdev write zeroes read block ...passed 00:34:16.994 Test: blockdev write zeroes read no split ...passed 00:34:17.255 Test: blockdev write zeroes read split ...passed 00:34:17.255 Test: blockdev write zeroes read split partial ...passed 00:34:17.255 Test: blockdev reset ...[2024-10-30 14:21:15.315542] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:17.255 [2024-10-30 14:21:15.315643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc7b30 (9): Bad file descriptor 00:34:17.255 [2024-10-30 14:21:15.361789] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:17.255 passed 00:34:17.255 Test: blockdev write read 8 blocks ...passed 00:34:17.255 Test: blockdev write read size > 128k ...passed 00:34:17.255 Test: blockdev write read invalid size ...passed 00:34:17.255 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:17.255 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:17.255 Test: blockdev write read max offset ...passed 00:34:17.255 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:17.255 Test: blockdev writev readv 8 blocks ...passed 00:34:17.255 Test: blockdev writev readv 30 x 1block ...passed 00:34:17.255 Test: blockdev writev readv block ...passed 00:34:17.516 Test: blockdev writev readv size > 128k ...passed 00:34:17.516 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:17.516 Test: blockdev comparev and writev ...[2024-10-30 14:21:15.589820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:17.516 [2024-10-30 14:21:15.589869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.589886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:17.516 [2024-10-30 14:21:15.589895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.590565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:17.516 [2024-10-30 14:21:15.590578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.590593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:17.516 [2024-10-30 14:21:15.590600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.591249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:17.516 [2024-10-30 14:21:15.591261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.591275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:17.516 [2024-10-30 14:21:15.591283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.591939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:17.516 [2024-10-30 14:21:15.591951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.591965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:17.516 [2024-10-30 14:21:15.591973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:17.516 passed 00:34:17.516 Test: blockdev nvme passthru rw ...passed 00:34:17.516 Test: blockdev nvme passthru vendor specific ...[2024-10-30 14:21:15.676650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:17.516 [2024-10-30 14:21:15.676668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.677091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:17.516 [2024-10-30 14:21:15.677103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.677477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:17.516 [2024-10-30 14:21:15.677489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:17.516 [2024-10-30 14:21:15.677896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:17.516 [2024-10-30 14:21:15.677907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:17.516 passed 00:34:17.516 Test: blockdev nvme admin passthru ...passed 00:34:17.516 Test: blockdev copy ...passed 00:34:17.516 00:34:17.516 Run Summary: Type Total Ran Passed Failed Inactive 00:34:17.516 suites 1 1 n/a 0 0 00:34:17.516 tests 23 23 23 0 0 00:34:17.516 asserts 152 152 152 0 n/a 00:34:17.516 00:34:17.516 Elapsed time = 1.110 seconds 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.777 rmmod nvme_tcp 00:34:17.777 rmmod nvme_fabrics 00:34:17.777 rmmod nvme_keyring 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
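
The teardown that begins above and finishes below runs in a fixed order: drop the subsystem over RPC, unload the host-side NVMe modules (nvmftestfini/nvmfcleanup, which retries the modprobe removal up to 20 times), kill the nvmf_tgt process, then strip the tagged iptables rules and the test namespace (nvmf_tcp_fini). Condensed, with the retry loop omitted and the namespace removal shown as an assumption about what remove_spdk_ns amounts to for this run:

  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  sync
  modprobe -v -r nvme-tcp            # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
  modprobe -v -r nvme-fabrics

  kill "$nvmfpid" && wait "$nvmfpid" # killprocess(): $nvmfpid was saved when nvmf_tgt started

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr
  ip netns delete cvl_0_0_ns_spdk    # assumed effect of remove_spdk_ns for this topology
  ip -4 addr flush cvl_0_1
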
00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1301045 ']' 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1301045 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1301045 ']' 00:34:17.777 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1301045 00:34:17.778 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:17.778 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.778 14:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1301045 00:34:17.778 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:17.778 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:17.778 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1301045' 00:34:17.778 killing process with pid 1301045 00:34:17.778 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1301045 00:34:17.778 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1301045 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.038 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.589 14:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:20.589 00:34:20.589 real 0m12.381s 00:34:20.589 user 
0m9.591s 00:34:20.589 sys 0m6.583s 00:34:20.589 14:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:20.589 14:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:20.589 ************************************ 00:34:20.589 END TEST nvmf_bdevio 00:34:20.589 ************************************ 00:34:20.589 14:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:20.589 00:34:20.589 real 5m0.594s 00:34:20.589 user 10m10.033s 00:34:20.589 sys 2m5.802s 00:34:20.589 14:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:20.589 14:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:20.589 ************************************ 00:34:20.589 END TEST nvmf_target_core_interrupt_mode 00:34:20.589 ************************************ 00:34:20.589 14:21:18 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:20.589 14:21:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:20.589 14:21:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:20.589 14:21:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:20.589 ************************************ 00:34:20.589 START TEST nvmf_interrupt 00:34:20.589 ************************************ 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:20.589 * Looking for test storage... 
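
Each suite in this log is driven by the same run_test wrapper: it prints the START TEST banner, executes the test script under xtrace with timing (the real/user/sys lines above), and closes with the END TEST banner. A minimal stand-in that mimics the pattern visible here; run_test_sketch is a hypothetical name, and the real helper in autotest_common.sh also manages xtrace state and suite accounting:

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                     # produces the per-test real/user/sys lines
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  run_test_sketch nvmf_interrupt ./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
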
00:34:20.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:20.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.589 --rc genhtml_branch_coverage=1 00:34:20.589 --rc genhtml_function_coverage=1 00:34:20.589 --rc genhtml_legend=1 00:34:20.589 --rc geninfo_all_blocks=1 00:34:20.589 --rc geninfo_unexecuted_blocks=1 00:34:20.589 00:34:20.589 ' 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:20.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.589 --rc genhtml_branch_coverage=1 00:34:20.589 --rc genhtml_function_coverage=1 00:34:20.589 --rc genhtml_legend=1 00:34:20.589 --rc geninfo_all_blocks=1 00:34:20.589 --rc geninfo_unexecuted_blocks=1 00:34:20.589 00:34:20.589 ' 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:20.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.589 --rc genhtml_branch_coverage=1 00:34:20.589 --rc genhtml_function_coverage=1 00:34:20.589 --rc genhtml_legend=1 00:34:20.589 --rc geninfo_all_blocks=1 00:34:20.589 --rc geninfo_unexecuted_blocks=1 00:34:20.589 00:34:20.589 ' 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:20.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.589 --rc genhtml_branch_coverage=1 00:34:20.589 --rc genhtml_function_coverage=1 00:34:20.589 --rc genhtml_legend=1 00:34:20.589 --rc geninfo_all_blocks=1 00:34:20.589 --rc geninfo_unexecuted_blocks=1 00:34:20.589 00:34:20.589 ' 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.589 14:21:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:20.590 14:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:28.736 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.736 14:21:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:28.736 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:28.736 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:28.736 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:28.736 14:21:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.736 14:21:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:34:28.736 00:34:28.736 --- 10.0.0.2 ping statistics --- 00:34:28.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.736 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:28.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:34:28.736 00:34:28.736 --- 10.0.0.1 ping statistics --- 00:34:28.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.736 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1305667 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1305667 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1305667 ']' 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.736 14:21:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.737 14:21:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.737 14:21:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.737 14:21:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.737 [2024-10-30 14:21:26.256567] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:28.737 [2024-10-30 14:21:26.257727] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:34:28.737 [2024-10-30 14:21:26.257786] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.737 [2024-10-30 14:21:26.356088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:28.737 [2024-10-30 14:21:26.406792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:28.737 [2024-10-30 14:21:26.406844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.737 [2024-10-30 14:21:26.406853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.737 [2024-10-30 14:21:26.406860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.737 [2024-10-30 14:21:26.406866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.737 [2024-10-30 14:21:26.408385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.737 [2024-10-30 14:21:26.408389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.737 [2024-10-30 14:21:26.484592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:28.737 [2024-10-30 14:21:26.485256] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:28.737 [2024-10-30 14:21:26.485523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:28.997 5000+0 records in 00:34:28.997 5000+0 records out 00:34:28.997 10240000 bytes (10 MB, 9.8 MiB) copied, 0.018246 s, 561 MB/s 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.997 AIO0 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.997 [2024-10-30 14:21:27.177388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.997 14:21:27 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.997 [2024-10-30 14:21:27.221837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1305667 0 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1305667 0 idle 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1305667 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1305667 -w 256 00:34:28.997 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1305667 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.32 reactor_0' 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1305667 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.32 reactor_0 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1305667 1 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1305667 1 idle 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1305667 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1305667 -w 256 00:34:29.258 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1305671 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1305671 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1306037 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
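The trace above brings the interrupt-mode target up end-to-end before driving load: a 10 MB AIO backing file is created with dd, exposed as bdev AIO0, and published over NVMe/TCP on 10.0.0.2:4420, after which spdk_nvme_perf is launched on cores 2-3 (-c 0xC) to push 256-deep 4 KiB random mixed I/O (-w randrw -M 30) for 10 seconds. A condensed sketch of that sequence follows; it assumes the harness's rpc_cmd forwards to SPDK's scripts/rpc.py against the nvmf_tgt started earlier with --interrupt-mode, the flags are copied from the trace, and the ./aiofile path is shortened here for readability:

  # Target side: backing file -> AIO bdev -> TCP transport/subsystem/namespace/listener
  dd if=/dev/zero of=./aiofile bs=2048 count=5000
  scripts/rpc.py bdev_aio_create ./aiofile AIO0 2048
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Load generator, as invoked in the trace above:
  build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The lines that follow then verify that both reactors actually leave idle while the workload runs, using the same top-based CPU check as the earlier idle assertions but with the busy threshold lowered to 30%.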
00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1305667 0 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1305667 0 busy 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1305667 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1305667 -w 256 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1305667 root 20 0 128.2g 44928 32256 R 53.3 0.0 0:00.41 reactor_0' 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1305667 root 20 0 128.2g 44928 32256 R 53.3 0.0 0:00.41 reactor_0 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=53.3 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=53 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1305667 1 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1305667 1 busy 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1305667 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1305667 -w 256 00:34:29.519 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1305671 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.23 reactor_1' 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1305671 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.23 reactor_1 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:29.780 14:21:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1306037 00:34:39.779 Initializing NVMe Controllers 00:34:39.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:39.779 Controller IO queue size 256, less than required. 00:34:39.779 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:39.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:39.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:39.779 Initialization complete. Launching workers. 
00:34:39.779 ======================================================== 00:34:39.779 Latency(us) 00:34:39.779 Device Information : IOPS MiB/s Average min max 00:34:39.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18263.11 71.34 14022.31 3975.12 32351.10 00:34:39.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19915.30 77.79 12856.13 8005.69 29071.94 00:34:39.779 ======================================================== 00:34:39.779 Total : 38178.41 149.13 13413.99 3975.12 32351.10 00:34:39.779 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1305667 0 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1305667 0 idle 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1305667 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1305667 -w 256 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1305667 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.31 reactor_0' 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1305667 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.31 reactor_0 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1305667 1 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1305667 1 idle 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1305667 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1305667 -w 256 00:34:39.779 14:21:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1305671 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1305671 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:40.041 14:21:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:40.611 14:21:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:40.611 14:21:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:40.611 14:21:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:40.611 14:21:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:40.611 14:21:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1305667 0 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1305667 0 idle 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1305667 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1305667 -w 256 00:34:43.162 14:21:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1305667 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.66 reactor_0' 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1305667 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.66 reactor_0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1305667 1 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1305667 1 idle 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1305667 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
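From here the test exercises the same target from the kernel initiator: nvme-cli connects to the subsystem, the harness waits for the namespace to surface by matching the SPDKISFASTANDAWESOME serial in lsblk, the reactor threads are re-checked for idleness, and the controller is then disconnected before the target is torn down. A minimal sketch of that initiator-side flow, using only commands that appear in the trace (the polling loop is simplified and the variable names are illustrative):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  # Wait until the namespace shows up, identified by the subsystem serial number
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
  # Idle check used throughout: %CPU of the reactor thread, parsed from a one-shot top
  top -bHn 1 -p "$nvmfpid" -w 256 | grep reactor_1 | sed -e 's/^\s*//g' | awk '{print $9}'
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1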
00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1305667 -w 256 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1305671 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1305671 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:43.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:43.162 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:43.162 rmmod nvme_tcp 00:34:43.162 rmmod nvme_fabrics 00:34:43.423 rmmod nvme_keyring 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1305667 ']' 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1305667 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1305667 ']' 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1305667 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1305667 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1305667' 00:34:43.423 killing process with pid 1305667 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1305667 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1305667 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:43.423 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:43.684 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:43.684 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:43.684 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:43.684 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:43.684 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:43.684 14:21:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.684 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:43.684 14:21:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.596 14:21:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.596 00:34:45.596 real 0m25.380s 00:34:45.596 user 0m40.166s 00:34:45.596 sys 0m9.851s 00:34:45.596 14:21:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.596 14:21:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:45.596 ************************************ 00:34:45.596 END TEST nvmf_interrupt 00:34:45.596 ************************************ 00:34:45.596 00:34:45.596 real 29m55.065s 00:34:45.596 user 61m18.174s 00:34:45.596 sys 10m8.727s 00:34:45.596 14:21:43 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.596 14:21:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.596 ************************************ 00:34:45.596 END TEST nvmf_tcp 00:34:45.596 ************************************ 00:34:45.596 14:21:43 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:34:45.596 14:21:43 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:45.596 14:21:43 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:45.596 14:21:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:45.596 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.856 ************************************ 00:34:45.856 START TEST spdkcli_nvmf_tcp 00:34:45.856 ************************************ 00:34:45.856 14:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:45.856 * Looking for test storage... 00:34:45.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.856 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:45.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.856 --rc genhtml_branch_coverage=1 00:34:45.856 --rc genhtml_function_coverage=1 00:34:45.856 --rc genhtml_legend=1 00:34:45.857 --rc geninfo_all_blocks=1 00:34:45.857 --rc geninfo_unexecuted_blocks=1 00:34:45.857 00:34:45.857 ' 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:45.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.857 --rc genhtml_branch_coverage=1 00:34:45.857 --rc genhtml_function_coverage=1 00:34:45.857 --rc genhtml_legend=1 00:34:45.857 --rc geninfo_all_blocks=1 00:34:45.857 --rc geninfo_unexecuted_blocks=1 00:34:45.857 00:34:45.857 ' 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:45.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.857 --rc genhtml_branch_coverage=1 00:34:45.857 --rc genhtml_function_coverage=1 00:34:45.857 --rc genhtml_legend=1 00:34:45.857 --rc geninfo_all_blocks=1 00:34:45.857 --rc geninfo_unexecuted_blocks=1 00:34:45.857 00:34:45.857 ' 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:45.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.857 --rc genhtml_branch_coverage=1 00:34:45.857 --rc genhtml_function_coverage=1 00:34:45.857 --rc genhtml_legend=1 00:34:45.857 --rc geninfo_all_blocks=1 00:34:45.857 --rc geninfo_unexecuted_blocks=1 00:34:45.857 00:34:45.857 ' 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:45.857 
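The trace above is scripts/common.sh deciding whether the installed lcov is older than 2.x (the `lt 1.15 2` call), which selects the old-style --rc lcov_* option names. A standalone re-implementation of that dotted-version comparison, written only for illustration (the helper name cmp_lt below is not part of SPDK), behaves the same way:

# Standalone sketch of the version comparison traced above (scripts/common.sh
# cmp_versions / lt). Illustration only; cmp_lt is not an SPDK helper.
cmp_lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"    # "2"    -> (2)
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        (( a < b )) && return 0                 # strictly less-than
        (( a > b )) && return 1
    done
    return 1                                    # equal is not less-than
}

cmp_lt 1.15 2 && echo 'lcov < 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
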
14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:45.857 14:21:44 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:45.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1309235 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1309235 00:34:45.857 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1309235 ']' 00:34:46.117 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.117 14:21:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:46.117 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.117 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:46.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.117 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.117 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.117 [2024-10-30 14:21:44.208831] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:34:46.117 [2024-10-30 14:21:44.208888] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309235 ] 00:34:46.117 [2024-10-30 14:21:44.290324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:46.117 [2024-10-30 14:21:44.321364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.117 [2024-10-30 14:21:44.321365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.061 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.061 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:47.061 14:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:47.061 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.061 14:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.061 14:21:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:47.061 14:21:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:47.061 14:21:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:47.061 14:21:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.061 14:21:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.061 14:21:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:47.061 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:47.061 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:47.061 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:47.061 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:47.061 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:47.061 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:47.061 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:47.061 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:47.061 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:47.061 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:47.061 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:47.061 ' 00:34:49.605 [2024-10-30 14:21:47.749902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.988 [2024-10-30 14:21:49.106214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:53.527 [2024-10-30 14:21:51.633367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:55.590 [2024-10-30 14:21:53.855691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:57.556 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:57.556 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:57.556 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:57.556 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:57.556 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:57.556 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:57.556 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:57.556 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:57.556 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:57.556 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:57.556 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:57.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:57.556 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:57.556 14:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:57.557 14:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:57.557 14:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.557 14:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:57.557 14:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:57.557 14:21:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.557 14:21:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:57.557 14:21:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:57.816 14:21:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:57.816 14:21:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:57.816 14:21:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:57.816 14:21:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:57.816 14:21:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.076 
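The check_match step traced just above is what turns the spdkcli configuration into a pass/fail result: it dumps the live /nvmf subtree and compares it with a recorded match file. Roughly, spdkcli/common.sh does the following here; the redirect into the .test file is inferred (set -x does not print redirections), and the paths are this job's workspace paths:

# Sketch of check_match as traced above (spdkcli/common.sh). The redirect
# target is inferred from the rm -f path in the trace.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1. Dump the current spdkcli tree under /nvmf.
"$rootdir/scripts/spdkcli.py" ll /nvmf \
    > "$rootdir/test/spdkcli/match_files/spdkcli_nvmf.test"

# 2. Compare the dump against the recorded expected output.
"$rootdir/test/app/match/match" \
    "$rootdir/test/spdkcli/match_files/spdkcli_nvmf.test.match"

# 3. Drop the generated dump so the next run starts clean.
rm -f "$rootdir/test/spdkcli/match_files/spdkcli_nvmf.test"
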
14:21:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:58.077 14:21:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:58.077 14:21:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.077 14:21:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:58.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:58.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:58.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:58.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:58.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:58.077 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:58.077 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:58.077 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:58.077 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:58.077 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:58.077 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:58.077 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:58.077 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:58.077 ' 00:35:04.660 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:04.660 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:04.660 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:04.660 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:04.660 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:04.660 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:04.660 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:04.660 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:04.660 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:04.660 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:04.660 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:04.660 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:04.660 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:04.660 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.660 
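The teardown above is driven the same way as the earlier creation step: spdkcli_job.py receives one quoted argument whose lines are "'spdkcli command' 'expected substring'" pairs (the create job earlier in the log uses the same shape with a trailing True/False field). A cut-down sketch of that invocation, repeating only a few of the delete commands from the trace and assuming the job script is called exactly as shown there:

# Cut-down sketch of the spdkcli_job.py teardown invocation traced above.
# Each line of the single quoted argument is: 'spdkcli command' 'expected substring'.
spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py

"$spdkcli_job" "'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1' 'Malloc3'
'/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3' 'nqn.2014-08.org.spdk:cnode3'
'/nvmf/subsystem delete_all' 'nqn.2014-08.org.spdk:cnode2'
'/bdevs/malloc delete Malloc1' 'Malloc1'"
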
14:22:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1309235 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1309235 ']' 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1309235 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1309235 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1309235' 00:35:04.660 killing process with pid 1309235 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1309235 00:35:04.660 14:22:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1309235 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1309235 ']' 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1309235 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1309235 ']' 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1309235 00:35:04.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1309235) - No such process 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1309235 is not found' 00:35:04.660 Process with pid 1309235 is not found 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:04.660 00:35:04.660 real 0m18.106s 00:35:04.660 user 0m40.277s 00:35:04.660 sys 0m0.821s 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:04.660 14:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.660 ************************************ 00:35:04.660 END TEST spdkcli_nvmf_tcp 00:35:04.660 ************************************ 00:35:04.660 14:22:02 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:04.660 14:22:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:04.660 14:22:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:04.660 14:22:02 -- common/autotest_common.sh@10 -- # set +x 00:35:04.660 ************************************ 00:35:04.660 START TEST nvmf_identify_passthru 00:35:04.660 ************************************ 00:35:04.660 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:04.660 * Looking for test 
storage... 00:35:04.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:04.660 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:04.661 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:04.661 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:04.661 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:04.661 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:04.661 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:04.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.661 --rc genhtml_branch_coverage=1 00:35:04.661 --rc genhtml_function_coverage=1 00:35:04.661 --rc genhtml_legend=1 00:35:04.661 --rc geninfo_all_blocks=1 00:35:04.661 --rc geninfo_unexecuted_blocks=1 00:35:04.661 00:35:04.661 ' 00:35:04.661 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:04.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.661 --rc genhtml_branch_coverage=1 00:35:04.661 --rc genhtml_function_coverage=1 00:35:04.661 --rc genhtml_legend=1 00:35:04.661 --rc geninfo_all_blocks=1 00:35:04.661 --rc geninfo_unexecuted_blocks=1 00:35:04.661 00:35:04.661 ' 00:35:04.661 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:04.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.661 --rc genhtml_branch_coverage=1 00:35:04.661 --rc genhtml_function_coverage=1 00:35:04.661 --rc genhtml_legend=1 00:35:04.661 --rc geninfo_all_blocks=1 00:35:04.661 --rc geninfo_unexecuted_blocks=1 00:35:04.661 00:35:04.661 ' 00:35:04.661 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:04.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.661 --rc genhtml_branch_coverage=1 00:35:04.661 --rc genhtml_function_coverage=1 00:35:04.661 --rc genhtml_legend=1 00:35:04.661 --rc geninfo_all_blocks=1 00:35:04.661 --rc geninfo_unexecuted_blocks=1 00:35:04.661 00:35:04.661 ' 00:35:04.661 14:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:04.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:04.661 14:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.661 14:22:02 nvmf_identify_passthru -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:04.661 14:22:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.661 14:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:04.661 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:04.662 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.662 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.662 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.662 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:04.662 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:04.662 14:22:02 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:04.662 14:22:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:11.253 14:22:09 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:11.253 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:11.253 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:11.253 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:11.253 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:11.253 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:11.254 14:22:09 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:11.254 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:11.254 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.254 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:11.254 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:11.254 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:11.254 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:11.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:11.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:35:11.515 00:35:11.515 --- 10.0.0.2 ping statistics --- 00:35:11.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.515 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:11.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
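The nvmf_tcp_init sequence traced above gives this phy run its target/initiator split: the first e810 port (cvl_0_0) moves into a private network namespace and becomes the target at 10.0.0.2, while cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1, and both directions are smoke-tested with ping. Condensed to its essentials (interface names, addresses and the iptables comment tag are the ones from this run; root is required):

# Condensed sketch of the nvmf_tcp_init steps traced above.
ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator-facing interface; the comment
# tag lets teardown strip the rule again via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                  # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host
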
00:35:11.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:35:11.515 00:35:11.515 --- 10.0.0.1 ping statistics --- 00:35:11.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.515 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:11.515 14:22:09 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:11.515 14:22:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.515 14:22:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:11.515 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:11.776 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:11.776 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:35:11.776 14:22:09 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:35:11.776 14:22:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:11.776 14:22:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:11.776 14:22:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:11.776 14:22:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:11.776 14:22:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:12.347 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:12.347 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:12.347 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:12.347 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:12.609 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:12.609 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:12.609 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:12.609 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.870 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:12.870 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:12.870 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.870 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1316650 00:35:12.870 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:12.870 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:12.870 14:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1316650 00:35:12.870 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1316650 ']' 00:35:12.870 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.870 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.870 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.870 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.870 14:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.870 [2024-10-30 14:22:11.005338] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:35:12.870 [2024-10-30 14:22:11.005406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.870 [2024-10-30 14:22:11.104170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:12.870 [2024-10-30 14:22:11.158388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:12.870 [2024-10-30 14:22:11.158441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
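Just before starting the passthru nvmf_tgt traced above, the test records the local controller's serial and model so the identify data served over NVMe-oF can be compared against them later. A sketch of that extraction, using the BDF reported by scripts/gen_nvme.sh in this run (0000:65:00.0); taking the first traddr with head -n1 is a simplification of the helper's bdf array handling:

# Sketch of the identify step traced above: find the first local NVMe BDF and
# pull the serial / model strings out of spdk_nvme_identify output.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

nvme_serial_number=$("$rootdir/build/bin/spdk_nvme_identify" \
    -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$("$rootdir/build/bin/spdk_nvme_identify" \
    -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')

echo "serial=$nvme_serial_number model=$nvme_model_number"
# In this run: serial=S64GNE0R605487, model=SAMSUNG (awk keeps only the first word)
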
00:35:12.870 [2024-10-30 14:22:11.158450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:12.870 [2024-10-30 14:22:11.158457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:12.870 [2024-10-30 14:22:11.158464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:12.870 [2024-10-30 14:22:11.160541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.870 [2024-10-30 14:22:11.160695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:12.870 [2024-10-30 14:22:11.160856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:12.870 [2024-10-30 14:22:11.160856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:13.811 14:22:11 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.811 INFO: Log level set to 20 00:35:13.811 INFO: Requests: 00:35:13.811 { 00:35:13.811 "jsonrpc": "2.0", 00:35:13.811 "method": "nvmf_set_config", 00:35:13.811 "id": 1, 00:35:13.811 "params": { 00:35:13.811 "admin_cmd_passthru": { 00:35:13.811 "identify_ctrlr": true 00:35:13.811 } 00:35:13.811 } 00:35:13.811 } 00:35:13.811 00:35:13.811 INFO: response: 00:35:13.811 { 00:35:13.811 "jsonrpc": "2.0", 00:35:13.811 "id": 1, 00:35:13.811 "result": true 00:35:13.811 } 00:35:13.811 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.811 14:22:11 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.811 INFO: Setting log level to 20 00:35:13.811 INFO: Setting log level to 20 00:35:13.811 INFO: Log level set to 20 00:35:13.811 INFO: Log level set to 20 00:35:13.811 INFO: Requests: 00:35:13.811 { 00:35:13.811 "jsonrpc": "2.0", 00:35:13.811 "method": "framework_start_init", 00:35:13.811 "id": 1 00:35:13.811 } 00:35:13.811 00:35:13.811 INFO: Requests: 00:35:13.811 { 00:35:13.811 "jsonrpc": "2.0", 00:35:13.811 "method": "framework_start_init", 00:35:13.811 "id": 1 00:35:13.811 } 00:35:13.811 00:35:13.811 [2024-10-30 14:22:11.893947] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:13.811 INFO: response: 00:35:13.811 { 00:35:13.811 "jsonrpc": "2.0", 00:35:13.811 "id": 1, 00:35:13.811 "result": true 00:35:13.811 } 00:35:13.811 00:35:13.811 INFO: response: 00:35:13.811 { 00:35:13.811 "jsonrpc": "2.0", 00:35:13.811 "id": 1, 00:35:13.811 "result": true 00:35:13.811 } 00:35:13.811 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.811 14:22:11 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.811 14:22:11 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:13.811 INFO: Setting log level to 40 00:35:13.811 INFO: Setting log level to 40 00:35:13.811 INFO: Setting log level to 40 00:35:13.811 [2024-10-30 14:22:11.907271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.811 14:22:11 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.811 14:22:11 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.811 14:22:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.072 Nvme0n1 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.072 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.072 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.072 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.072 [2024-10-30 14:22:12.308912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.072 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.072 [ 00:35:14.072 { 00:35:14.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:14.072 "subtype": "Discovery", 00:35:14.072 "listen_addresses": [], 00:35:14.072 "allow_any_host": true, 00:35:14.072 "hosts": [] 00:35:14.072 }, 00:35:14.072 { 00:35:14.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:14.072 "subtype": "NVMe", 00:35:14.072 "listen_addresses": [ 00:35:14.072 { 00:35:14.072 "trtype": "TCP", 00:35:14.072 "adrfam": "IPv4", 00:35:14.072 "traddr": "10.0.0.2", 00:35:14.072 "trsvcid": "4420" 00:35:14.072 } 00:35:14.072 ], 00:35:14.072 "allow_any_host": true, 00:35:14.072 "hosts": [], 00:35:14.072 "serial_number": 
"SPDK00000000000001", 00:35:14.072 "model_number": "SPDK bdev Controller", 00:35:14.072 "max_namespaces": 1, 00:35:14.072 "min_cntlid": 1, 00:35:14.072 "max_cntlid": 65519, 00:35:14.072 "namespaces": [ 00:35:14.072 { 00:35:14.072 "nsid": 1, 00:35:14.072 "bdev_name": "Nvme0n1", 00:35:14.072 "name": "Nvme0n1", 00:35:14.072 "nguid": "36344730526054870025384500000044", 00:35:14.072 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:14.072 } 00:35:14.072 ] 00:35:14.072 } 00:35:14.072 ] 00:35:14.072 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.072 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:14.072 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:14.072 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:14.332 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:14.332 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:14.332 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:14.332 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:14.593 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:14.593 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:14.593 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:14.593 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.593 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:14.593 14:22:12 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:14.593 rmmod nvme_tcp 00:35:14.593 rmmod nvme_fabrics 00:35:14.593 rmmod nvme_keyring 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1316650 ']' 00:35:14.593 14:22:12 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1316650 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1316650 ']' 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1316650 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1316650 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1316650' 00:35:14.593 killing process with pid 1316650 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1316650 00:35:14.593 14:22:12 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1316650 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:15.164 14:22:13 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.164 14:22:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:15.164 14:22:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.075 14:22:15 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:17.075 00:35:17.075 real 0m13.141s 00:35:17.075 user 0m10.243s 00:35:17.075 sys 0m6.656s 00:35:17.075 14:22:15 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.075 14:22:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.075 ************************************ 00:35:17.075 END TEST nvmf_identify_passthru 00:35:17.075 ************************************ 00:35:17.075 14:22:15 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:17.075 14:22:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:17.075 14:22:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.075 14:22:15 -- common/autotest_common.sh@10 -- # set +x 00:35:17.075 ************************************ 00:35:17.075 START TEST nvmf_dif 00:35:17.075 ************************************ 00:35:17.075 14:22:15 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:17.336 * Looking for test storage... 
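The lcov gate traced in the next few lines goes through scripts/common.sh's cmp_versions helper, which splits each version string on '.', '-' and ':' and compares the fields numerically. A condensed stand-in (not the script's exact code) that makes the same "lt 1.15 2" decision:

  lt() {   # true when $1 sorts before $2, compared field by field
    local IFS=.-: a b i
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
  }
  lt 1.15 2 && echo "lcov older than 2.x: keep the branch/function coverage flags exported below"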
00:35:17.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:17.336 14:22:15 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.337 --rc genhtml_branch_coverage=1 00:35:17.337 --rc genhtml_function_coverage=1 00:35:17.337 --rc genhtml_legend=1 00:35:17.337 --rc geninfo_all_blocks=1 00:35:17.337 --rc geninfo_unexecuted_blocks=1 00:35:17.337 00:35:17.337 ' 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.337 --rc genhtml_branch_coverage=1 00:35:17.337 --rc genhtml_function_coverage=1 00:35:17.337 --rc genhtml_legend=1 00:35:17.337 --rc geninfo_all_blocks=1 00:35:17.337 --rc geninfo_unexecuted_blocks=1 00:35:17.337 00:35:17.337 ' 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:35:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.337 --rc genhtml_branch_coverage=1 00:35:17.337 --rc genhtml_function_coverage=1 00:35:17.337 --rc genhtml_legend=1 00:35:17.337 --rc geninfo_all_blocks=1 00:35:17.337 --rc geninfo_unexecuted_blocks=1 00:35:17.337 00:35:17.337 ' 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.337 --rc genhtml_branch_coverage=1 00:35:17.337 --rc genhtml_function_coverage=1 00:35:17.337 --rc genhtml_legend=1 00:35:17.337 --rc geninfo_all_blocks=1 00:35:17.337 --rc geninfo_unexecuted_blocks=1 00:35:17.337 00:35:17.337 ' 00:35:17.337 14:22:15 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.337 14:22:15 nvmf_dif -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.337 14:22:15 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.337 14:22:15 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.337 14:22:15 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.337 14:22:15 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:17.337 14:22:15 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.337 14:22:15 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:17.337 14:22:15 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:17.337 14:22:15 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:17.337 14:22:15 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:17.337 14:22:15 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:17.337 14:22:15 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:17.337 14:22:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:25.479 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.479 
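The discovery pass here keys supported NICs off PCI device IDs (Intel E810 0x1592/0x159b and X722 0x37d2, plus the Mellanox list) and then, for each match, resolves the kernel net device through sysfs, as the "Found net devices under ..." lines that follow show. Outside the harness the same lookup is roughly (lspci and the sysfs path are standard kernel interfaces, not SPDK-specific):

  # list Intel E810 functions by vendor:device ID, then map one to its net interface
  lspci -Dnn | grep -Ei '8086:(1592|159b)'
  ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0 in this run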
14:22:22 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:25.479 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:25.479 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.479 14:22:22 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:25.480 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:25.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:35:25.480 00:35:25.480 --- 10.0.0.2 ping statistics --- 00:35:25.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.480 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:25.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:25.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:35:25.480 00:35:25.480 --- 10.0.0.1 ping statistics --- 00:35:25.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.480 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:25.480 14:22:22 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:28.026 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:28.026 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:28.026 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:28.287 14:22:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:28.287 14:22:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:28.287 14:22:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:28.287 14:22:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1322637 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1322637 00:35:28.287 14:22:26 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:28.287 14:22:26 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1322637 ']' 00:35:28.287 14:22:26 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.287 14:22:26 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:28.287 14:22:26 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:28.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.287 14:22:26 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:28.287 14:22:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.287 [2024-10-30 14:22:26.447590] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:35:28.287 [2024-10-30 14:22:26.447643] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:28.287 [2024-10-30 14:22:26.540934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.287 [2024-10-30 14:22:26.576512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:28.287 [2024-10-30 14:22:26.576543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:28.287 [2024-10-30 14:22:26.576551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:28.287 [2024-10-30 14:22:26.576557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:28.287 [2024-10-30 14:22:26.576563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:28.287 [2024-10-30 14:22:26.577151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:29.230 14:22:27 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.230 14:22:27 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.230 14:22:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:29.230 14:22:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.230 [2024-10-30 14:22:27.277710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.230 14:22:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.230 14:22:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.230 ************************************ 00:35:29.230 START TEST fio_dif_1_default 00:35:29.230 ************************************ 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.230 bdev_null0 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.230 [2024-10-30 14:22:27.366083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:29.230 { 00:35:29.230 "params": { 00:35:29.230 "name": "Nvme$subsystem", 00:35:29.230 "trtype": "$TEST_TRANSPORT", 00:35:29.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.230 "adrfam": "ipv4", 00:35:29.230 "trsvcid": "$NVMF_PORT", 00:35:29.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.230 "hdgst": ${hdgst:-false}, 00:35:29.230 
"ddgst": ${ddgst:-false} 00:35:29.230 }, 00:35:29.230 "method": "bdev_nvme_attach_controller" 00:35:29.230 } 00:35:29.230 EOF 00:35:29.230 )") 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:29.230 "params": { 00:35:29.230 "name": "Nvme0", 00:35:29.230 "trtype": "tcp", 00:35:29.230 "traddr": "10.0.0.2", 00:35:29.230 "adrfam": "ipv4", 00:35:29.230 "trsvcid": "4420", 00:35:29.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:29.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:29.230 "hdgst": false, 00:35:29.230 "ddgst": false 00:35:29.230 }, 00:35:29.230 "method": "bdev_nvme_attach_controller" 00:35:29.230 }' 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:29.230 14:22:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.801 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:29.801 fio-3.35 00:35:29.801 Starting 1 thread 00:35:42.033 00:35:42.033 filename0: (groupid=0, jobs=1): err= 0: pid=1323167: Wed Oct 30 14:22:38 2024 00:35:42.033 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10037msec) 00:35:42.033 slat (nsec): min=5401, max=37819, avg=6174.46, stdev=1717.05 00:35:42.033 clat (usec): min=40950, max=42780, avg=41977.93, stdev=109.12 00:35:42.033 lat (usec): min=40958, max=42818, avg=41984.11, stdev=109.32 00:35:42.033 clat percentiles (usec): 00:35:42.033 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:35:42.033 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:42.033 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:42.033 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:42.033 | 99.99th=[42730] 00:35:42.033 bw ( KiB/s): min= 352, max= 384, per=99.74%, avg=380.80, stdev= 9.85, samples=20 00:35:42.033 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:35:42.033 lat (msec) : 50=100.00% 00:35:42.033 cpu : usr=93.59%, sys=6.19%, ctx=7, majf=0, minf=231 00:35:42.033 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.033 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.033 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:42.033 00:35:42.034 Run status group 0 (all jobs): 
00:35:42.034 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10037-10037msec 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.034 00:35:42.034 real 0m11.191s 00:35:42.034 user 0m24.908s 00:35:42.034 sys 0m0.987s 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 ************************************ 00:35:42.034 END TEST fio_dif_1_default 00:35:42.034 ************************************ 00:35:42.034 14:22:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:42.034 14:22:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:42.034 14:22:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 ************************************ 00:35:42.034 START TEST fio_dif_1_multi_subsystems 00:35:42.034 ************************************ 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 bdev_null0 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 [2024-10-30 14:22:38.635321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 bdev_null1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:42.034 { 00:35:42.034 "params": { 00:35:42.034 "name": "Nvme$subsystem", 00:35:42.034 "trtype": "$TEST_TRANSPORT", 00:35:42.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.034 "adrfam": "ipv4", 00:35:42.034 "trsvcid": "$NVMF_PORT", 00:35:42.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.034 "hdgst": ${hdgst:-false}, 00:35:42.034 "ddgst": ${ddgst:-false} 00:35:42.034 }, 00:35:42.034 "method": "bdev_nvme_attach_controller" 00:35:42.034 } 00:35:42.034 EOF 00:35:42.034 )") 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:42.034 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:42.035 { 00:35:42.035 "params": { 00:35:42.035 "name": "Nvme$subsystem", 00:35:42.035 "trtype": "$TEST_TRANSPORT", 00:35:42.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.035 "adrfam": "ipv4", 00:35:42.035 "trsvcid": "$NVMF_PORT", 00:35:42.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.035 "hdgst": ${hdgst:-false}, 00:35:42.035 "ddgst": ${ddgst:-false} 00:35:42.035 }, 00:35:42.035 "method": "bdev_nvme_attach_controller" 00:35:42.035 } 00:35:42.035 EOF 00:35:42.035 )") 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:42.035 "params": { 00:35:42.035 "name": "Nvme0", 00:35:42.035 "trtype": "tcp", 00:35:42.035 "traddr": "10.0.0.2", 00:35:42.035 "adrfam": "ipv4", 00:35:42.035 "trsvcid": "4420", 00:35:42.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.035 "hdgst": false, 00:35:42.035 "ddgst": false 00:35:42.035 }, 00:35:42.035 "method": "bdev_nvme_attach_controller" 00:35:42.035 },{ 00:35:42.035 "params": { 00:35:42.035 "name": "Nvme1", 00:35:42.035 "trtype": "tcp", 00:35:42.035 "traddr": "10.0.0.2", 00:35:42.035 "adrfam": "ipv4", 00:35:42.035 "trsvcid": "4420", 00:35:42.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.035 "hdgst": false, 00:35:42.035 "ddgst": false 00:35:42.035 }, 00:35:42.035 "method": "bdev_nvme_attach_controller" 00:35:42.035 }' 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:42.035 14:22:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.035 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:42.035 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:42.035 fio-3.35 00:35:42.035 Starting 2 threads 00:35:52.038 00:35:52.038 filename0: (groupid=0, jobs=1): err= 0: pid=1325566: Wed Oct 30 14:22:49 2024 00:35:52.038 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:35:52.038 slat (nsec): min=5401, max=59564, avg=6262.37, stdev=1851.22 00:35:52.038 clat (usec): min=439, max=42254, avg=20991.34, stdev=20190.60 00:35:52.038 lat (usec): min=446, max=42287, avg=20997.60, stdev=20190.51 00:35:52.038 clat percentiles (usec): 00:35:52.038 | 1.00th=[ 578], 5.00th=[ 701], 10.00th=[ 783], 20.00th=[ 807], 00:35:52.038 | 30.00th=[ 824], 40.00th=[ 848], 50.00th=[ 1090], 60.00th=[41157], 00:35:52.038 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:52.038 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:52.038 | 99.99th=[42206] 00:35:52.038 bw ( KiB/s): min= 704, max= 768, per=66.14%, avg=761.26, stdev=20.18, samples=19 00:35:52.038 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:35:52.038 lat (usec) : 500=0.21%, 750=6.14%, 1000=42.80% 00:35:52.038 lat (msec) : 2=0.84%, 50=50.00% 00:35:52.038 cpu : usr=95.61%, sys=4.17%, ctx=13, majf=0, minf=180 00:35:52.038 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.038 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.038 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:52.038 filename1: (groupid=0, jobs=1): err= 0: pid=1325567: Wed Oct 30 14:22:49 2024 00:35:52.038 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:35:52.038 slat (nsec): min=5396, max=39493, avg=5804.07, stdev=1475.58 00:35:52.038 clat (usec): min=40835, max=42361, avg=41014.12, stdev=180.35 00:35:52.039 lat (usec): min=40840, max=42400, avg=41019.92, stdev=180.94 00:35:52.039 clat percentiles (usec): 00:35:52.039 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:52.039 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:52.039 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:52.039 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:52.039 | 99.99th=[42206] 00:35:52.039 bw ( KiB/s): min= 384, max= 416, per=33.72%, avg=388.80, stdev=11.72, samples=20 00:35:52.039 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:52.039 lat (msec) : 50=100.00% 00:35:52.039 cpu : usr=95.71%, sys=4.08%, ctx=14, majf=0, minf=155 00:35:52.039 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:52.039 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.039 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:52.039 00:35:52.039 Run status group 0 (all jobs): 00:35:52.039 READ: bw=1151KiB/s (1178kB/s), 390KiB/s-762KiB/s (399kB/s-780kB/s), io=11.2MiB (11.8MB), run=10001-10012msec 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.039 00:35:52.039 real 0m11.564s 00:35:52.039 user 0m35.306s 00:35:52.039 sys 0m1.188s 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 ************************************ 00:35:52.039 END TEST fio_dif_1_multi_subsystems 00:35:52.039 ************************************ 00:35:52.039 14:22:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:52.039 14:22:50 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:52.039 14:22:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 ************************************ 00:35:52.039 START TEST fio_dif_rand_params 00:35:52.039 ************************************ 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 bdev_null0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.039 [2024-10-30 14:22:50.283650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:52.039 { 00:35:52.039 "params": { 00:35:52.039 "name": "Nvme$subsystem", 00:35:52.039 "trtype": "$TEST_TRANSPORT", 00:35:52.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.039 "adrfam": "ipv4", 00:35:52.039 "trsvcid": "$NVMF_PORT", 00:35:52.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.039 "hdgst": ${hdgst:-false}, 00:35:52.039 "ddgst": ${ddgst:-false} 00:35:52.039 }, 00:35:52.039 "method": "bdev_nvme_attach_controller" 00:35:52.039 } 00:35:52.039 EOF 00:35:52.039 )") 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:52.039 14:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:52.039 "params": { 00:35:52.039 "name": "Nvme0", 00:35:52.039 "trtype": "tcp", 00:35:52.039 "traddr": "10.0.0.2", 00:35:52.039 "adrfam": "ipv4", 00:35:52.039 "trsvcid": "4420", 00:35:52.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:52.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:52.039 "hdgst": false, 00:35:52.039 "ddgst": false 00:35:52.039 }, 00:35:52.040 "method": "bdev_nvme_attach_controller" 00:35:52.040 }' 00:35:52.040 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:52.040 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:52.040 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.040 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.040 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:52.323 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:52.323 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:52.323 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:52.323 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:52.323 14:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.590 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:52.590 ... 
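For reference, the sequence traced above maps onto a short stand-alone reproduction: the harness creates a null bdev with 16-byte metadata and the requested DIF type, exports it through an NVMe/TCP subsystem listening on 10.0.0.2:4420, and then drives it with fio through the SPDK bdev ioengine using a JSON config whose bdev_nvme_attach_controller parameters are printed just above. The commands below are a minimal sketch under stated assumptions, not part of the captured log: they assume an SPDK nvmf target is already running with a TCP transport created, that scripts/rpc.py is the stand-alone equivalent of the test suite's rpc_cmd wrapper, and that bdev.json and job.fio are hypothetical files standing in for the /dev/fd descriptors the harness builds on the fly (bdev.json holding the attach-controller block shown above inside SPDK's usual {"subsystems": [...]} wrapper).

    # create the DIF-enabled null bdev and export it over NVMe/TCP
    # (sizes, NQN, serial and address taken from the log above)
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # drive it with fio through the spdk_bdev engine, mirroring the command
    # line in the log; paths here are illustrative, bdev.json/job.fio are
    # stand-ins for the /dev/fd/62 and /dev/fd/61 descriptors used above
    LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

The job file for this particular run would request randread at bs=128k with iodepth=3, numjobs=3 and runtime=5, matching the filename0 parameters fio reports below; under SPDK's default naming the attached namespace is exposed to fio as bdev Nvme0n1.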
00:35:52.590 fio-3.35 00:35:52.590 Starting 3 threads 00:35:59.173 00:35:59.173 filename0: (groupid=0, jobs=1): err= 0: pid=1327754: Wed Oct 30 14:22:56 2024 00:35:59.173 read: IOPS=318, BW=39.9MiB/s (41.8MB/s)(201MiB/5046msec) 00:35:59.173 slat (nsec): min=5475, max=37184, avg=7350.48, stdev=1632.08 00:35:59.173 clat (usec): min=5106, max=50718, avg=9370.76, stdev=5323.05 00:35:59.173 lat (usec): min=5114, max=50724, avg=9378.11, stdev=5323.28 00:35:59.173 clat percentiles (usec): 00:35:59.173 | 1.00th=[ 5604], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7635], 00:35:59.173 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:35:59.173 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10552], 00:35:59.173 | 99.00th=[47449], 99.50th=[47973], 99.90th=[49546], 99.95th=[50594], 00:35:59.173 | 99.99th=[50594] 00:35:59.173 bw ( KiB/s): min=30976, max=46848, per=33.64%, avg=41139.20, stdev=4309.21, samples=10 00:35:59.173 iops : min= 242, max= 366, avg=321.40, stdev=33.67, samples=10 00:35:59.173 lat (msec) : 10=87.07%, 20=11.12%, 50=1.74%, 100=0.06% 00:35:59.173 cpu : usr=93.30%, sys=6.42%, ctx=11, majf=0, minf=53 00:35:59.173 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.173 issued rwts: total=1609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.173 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:59.173 filename0: (groupid=0, jobs=1): err= 0: pid=1327755: Wed Oct 30 14:22:56 2024 00:35:59.173 read: IOPS=313, BW=39.1MiB/s (41.0MB/s)(198MiB/5046msec) 00:35:59.173 slat (nsec): min=5487, max=35903, avg=7225.83, stdev=1825.60 00:35:59.173 clat (usec): min=4314, max=91349, avg=9544.70, stdev=7085.00 00:35:59.173 lat (usec): min=4322, max=91358, avg=9551.92, stdev=7085.04 00:35:59.173 clat percentiles (usec): 00:35:59.173 | 1.00th=[ 4948], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7439], 00:35:59.173 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9110], 00:35:59.173 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814], 00:35:59.173 | 99.00th=[49021], 99.50th=[49546], 99.90th=[88605], 99.95th=[91751], 00:35:59.173 | 99.99th=[91751] 00:35:59.173 bw ( KiB/s): min=29952, max=45312, per=33.01%, avg=40371.20, stdev=4870.81, samples=10 00:35:59.173 iops : min= 234, max= 354, avg=315.40, stdev=38.05, samples=10 00:35:59.173 lat (msec) : 10=85.19%, 20=12.47%, 50=1.90%, 100=0.44% 00:35:59.173 cpu : usr=93.52%, sys=6.22%, ctx=8, majf=0, minf=116 00:35:59.173 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.173 issued rwts: total=1580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.173 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:59.173 filename0: (groupid=0, jobs=1): err= 0: pid=1327756: Wed Oct 30 14:22:56 2024 00:35:59.173 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(204MiB/5046msec) 00:35:59.173 slat (nsec): min=5467, max=44740, avg=7613.72, stdev=1939.81 00:35:59.173 clat (usec): min=3707, max=90250, avg=9239.00, stdev=4464.70 00:35:59.173 lat (usec): min=3716, max=90258, avg=9246.62, stdev=4464.92 00:35:59.173 clat percentiles (usec): 00:35:59.173 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 6915], 20.00th=[ 7701], 
00:35:59.173 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:35:59.173 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10552], 95.00th=[10945], 00:35:59.173 | 99.00th=[12387], 99.50th=[47449], 99.90th=[50070], 99.95th=[90702], 00:35:59.173 | 99.99th=[90702] 00:35:59.173 bw ( KiB/s): min=35584, max=47872, per=34.11%, avg=41710.50, stdev=3100.18, samples=10 00:35:59.173 iops : min= 278, max= 374, avg=325.80, stdev=24.25, samples=10 00:35:59.173 lat (msec) : 4=0.18%, 10=78.62%, 20=20.22%, 50=0.92%, 100=0.06% 00:35:59.173 cpu : usr=93.52%, sys=6.20%, ctx=13, majf=0, minf=141 00:35:59.173 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.173 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.173 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:59.173 00:35:59.173 Run status group 0 (all jobs): 00:35:59.173 READ: bw=119MiB/s (125MB/s), 39.1MiB/s-40.4MiB/s (41.0MB/s-42.4MB/s), io=603MiB (632MB), run=5046-5046msec 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.173 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.173 bdev_null0 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 [2024-10-30 14:22:56.446337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 bdev_null1 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 bdev_null2 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.174 14:22:56 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:59.174 { 00:35:59.174 "params": { 00:35:59.174 "name": "Nvme$subsystem", 00:35:59.174 "trtype": "$TEST_TRANSPORT", 00:35:59.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.174 "adrfam": "ipv4", 00:35:59.174 "trsvcid": "$NVMF_PORT", 00:35:59.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.174 "hdgst": ${hdgst:-false}, 00:35:59.174 "ddgst": ${ddgst:-false} 00:35:59.174 }, 00:35:59.174 "method": "bdev_nvme_attach_controller" 00:35:59.174 } 00:35:59.174 EOF 00:35:59.174 )") 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:59.174 { 00:35:59.174 "params": { 00:35:59.174 "name": "Nvme$subsystem", 00:35:59.174 "trtype": "$TEST_TRANSPORT", 00:35:59.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.174 "adrfam": "ipv4", 00:35:59.174 "trsvcid": "$NVMF_PORT", 00:35:59.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.174 "hdgst": ${hdgst:-false}, 00:35:59.174 "ddgst": ${ddgst:-false} 00:35:59.174 }, 00:35:59.174 "method": "bdev_nvme_attach_controller" 00:35:59.174 } 00:35:59.174 EOF 00:35:59.174 )") 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.174 14:22:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:59.174 { 00:35:59.174 "params": { 00:35:59.174 "name": "Nvme$subsystem", 00:35:59.174 "trtype": "$TEST_TRANSPORT", 00:35:59.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.174 "adrfam": "ipv4", 00:35:59.174 "trsvcid": "$NVMF_PORT", 00:35:59.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.174 "hdgst": ${hdgst:-false}, 00:35:59.174 "ddgst": ${ddgst:-false} 00:35:59.174 }, 00:35:59.174 "method": "bdev_nvme_attach_controller" 00:35:59.174 } 00:35:59.174 EOF 00:35:59.174 )") 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:59.174 14:22:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:59.174 "params": { 00:35:59.174 "name": "Nvme0", 00:35:59.174 "trtype": "tcp", 00:35:59.174 "traddr": "10.0.0.2", 00:35:59.174 "adrfam": "ipv4", 00:35:59.174 "trsvcid": "4420", 00:35:59.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:59.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:59.174 "hdgst": false, 00:35:59.174 "ddgst": false 00:35:59.174 }, 00:35:59.174 "method": "bdev_nvme_attach_controller" 00:35:59.174 },{ 00:35:59.174 "params": { 00:35:59.174 "name": "Nvme1", 00:35:59.174 "trtype": "tcp", 00:35:59.174 "traddr": "10.0.0.2", 00:35:59.174 "adrfam": "ipv4", 00:35:59.174 "trsvcid": "4420", 00:35:59.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:59.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:59.175 "hdgst": false, 00:35:59.175 "ddgst": false 00:35:59.175 }, 00:35:59.175 "method": "bdev_nvme_attach_controller" 00:35:59.175 },{ 00:35:59.175 "params": { 00:35:59.175 "name": "Nvme2", 00:35:59.175 "trtype": "tcp", 00:35:59.175 "traddr": "10.0.0.2", 00:35:59.175 "adrfam": "ipv4", 00:35:59.175 "trsvcid": "4420", 00:35:59.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:59.175 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:59.175 "hdgst": false, 00:35:59.175 "ddgst": false 00:35:59.175 }, 00:35:59.175 "method": "bdev_nvme_attach_controller" 00:35:59.175 }' 00:35:59.175 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:59.175 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:59.175 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.175 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.175 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:59.175 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:59.175 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:59.175 
14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:59.175 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:59.175 14:22:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.175 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:59.175 ... 00:35:59.175 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:59.175 ... 00:35:59.175 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:59.175 ... 00:35:59.175 fio-3.35 00:35:59.175 Starting 24 threads 00:36:11.417 00:36:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=1329261: Wed Oct 30 14:23:08 2024 00:36:11.417 read: IOPS=691, BW=2767KiB/s (2834kB/s)(27.0MiB/10006msec) 00:36:11.417 slat (nsec): min=5588, max=94765, avg=17579.29, stdev=13184.80 00:36:11.417 clat (usec): min=4328, max=41261, avg=22986.63, stdev=3634.34 00:36:11.417 lat (usec): min=4351, max=41288, avg=23004.21, stdev=3636.20 00:36:11.417 clat percentiles (usec): 00:36:11.417 | 1.00th=[10814], 5.00th=[15008], 10.00th=[17433], 20.00th=[23200], 00:36:11.417 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.417 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:11.417 | 99.00th=[34341], 99.50th=[37487], 99.90th=[39060], 99.95th=[40633], 00:36:11.417 | 99.99th=[41157] 00:36:11.417 bw ( KiB/s): min= 2554, max= 3168, per=4.29%, avg=2772.74, stdev=180.48, samples=19 00:36:11.417 iops : min= 638, max= 792, avg=693.16, stdev=45.15, samples=19 00:36:11.417 lat (msec) : 10=0.87%, 20=12.50%, 50=86.64% 00:36:11.417 cpu : usr=99.11%, sys=0.61%, ctx=10, majf=0, minf=40 00:36:11.417 IO depths : 1=4.1%, 2=9.4%, 4=22.1%, 8=55.9%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:11.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 issued rwts: total=6922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=1329262: Wed Oct 30 14:23:08 2024 00:36:11.417 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10009msec) 00:36:11.417 slat (nsec): min=5648, max=94948, avg=20440.92, stdev=14978.91 00:36:11.417 clat (usec): min=12662, max=29036, avg=23826.94, stdev=874.48 00:36:11.417 lat (usec): min=12672, max=29042, avg=23847.38, stdev=872.68 00:36:11.417 clat percentiles (usec): 00:36:11.417 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:11.417 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.417 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.417 | 99.00th=[25035], 99.50th=[25297], 99.90th=[27395], 99.95th=[28967], 00:36:11.417 | 99.99th=[28967] 00:36:11.417 bw ( KiB/s): min= 2554, max= 2688, per=4.13%, avg=2667.16, stdev=48.59, samples=19 00:36:11.417 iops : min= 638, max= 672, avg=666.74, stdev=12.21, samples=19 00:36:11.417 lat (msec) : 20=0.81%, 50=99.19% 00:36:11.417 cpu : usr=97.80%, sys=1.30%, ctx=346, majf=0, minf=51 00:36:11.417 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=1329263: Wed Oct 30 14:23:08 2024 00:36:11.417 read: IOPS=673, BW=2695KiB/s (2760kB/s)(26.3MiB/10010msec) 00:36:11.417 slat (nsec): min=5615, max=93212, avg=20061.52, stdev=15412.03 00:36:11.417 clat (usec): min=10384, max=38936, avg=23569.27, stdev=2441.89 00:36:11.417 lat (usec): min=10393, max=38958, avg=23589.33, stdev=2443.35 00:36:11.417 clat percentiles (usec): 00:36:11.417 | 1.00th=[14877], 5.00th=[17695], 10.00th=[22938], 20.00th=[23462], 00:36:11.417 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.417 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:11.417 | 99.00th=[32637], 99.50th=[35390], 99.90th=[36963], 99.95th=[39060], 00:36:11.417 | 99.99th=[39060] 00:36:11.417 bw ( KiB/s): min= 2560, max= 3120, per=4.17%, avg=2697.79, stdev=133.77, samples=19 00:36:11.417 iops : min= 640, max= 780, avg=674.42, stdev=33.47, samples=19 00:36:11.417 lat (msec) : 20=6.85%, 50=93.15% 00:36:11.417 cpu : usr=98.06%, sys=1.29%, ctx=222, majf=0, minf=40 00:36:11.417 IO depths : 1=5.0%, 2=10.4%, 4=23.1%, 8=53.9%, 16=7.7%, 32=0.0%, >=64=0.0% 00:36:11.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 issued rwts: total=6744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=1329264: Wed Oct 30 14:23:08 2024 00:36:11.417 read: IOPS=667, BW=2668KiB/s (2732kB/s)(26.1MiB/10005msec) 00:36:11.417 slat (nsec): min=5428, max=91645, avg=25058.10, stdev=15329.09 00:36:11.417 clat (usec): min=5194, max=45144, avg=23753.81, stdev=2491.58 00:36:11.417 lat (usec): min=5200, max=45160, avg=23778.86, stdev=2492.37 00:36:11.417 clat percentiles (usec): 00:36:11.417 | 1.00th=[15008], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:11.417 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.417 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.417 | 99.00th=[33817], 99.50th=[38011], 99.90th=[45351], 99.95th=[45351], 00:36:11.417 | 99.99th=[45351] 00:36:11.417 bw ( KiB/s): min= 2432, max= 2688, per=4.10%, avg=2649.79, stdev=70.23, samples=19 00:36:11.417 iops : min= 608, max= 672, avg=662.42, stdev=17.60, samples=19 00:36:11.417 lat (msec) : 10=0.48%, 20=2.35%, 50=97.17% 00:36:11.417 cpu : usr=98.94%, sys=0.72%, ctx=77, majf=0, minf=51 00:36:11.417 IO depths : 1=5.3%, 2=11.2%, 4=24.1%, 8=52.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:11.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 issued rwts: total=6674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=1329265: Wed Oct 30 14:23:08 2024 00:36:11.417 read: IOPS=672, BW=2691KiB/s (2755kB/s)(26.3MiB/10013msec) 00:36:11.417 slat (nsec): min=5585, max=92075, avg=22750.03, stdev=14601.44 
00:36:11.417 clat (usec): min=8430, max=37737, avg=23589.21, stdev=2232.43 00:36:11.417 lat (usec): min=8439, max=37746, avg=23611.96, stdev=2234.35 00:36:11.417 clat percentiles (usec): 00:36:11.417 | 1.00th=[14746], 5.00th=[19530], 10.00th=[23200], 20.00th=[23462], 00:36:11.417 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.417 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:11.417 | 99.00th=[31851], 99.50th=[33817], 99.90th=[36963], 99.95th=[36963], 00:36:11.417 | 99.99th=[37487] 00:36:11.417 bw ( KiB/s): min= 2560, max= 2944, per=4.16%, avg=2688.53, stdev=97.55, samples=19 00:36:11.417 iops : min= 640, max= 736, avg=672.11, stdev=24.39, samples=19 00:36:11.417 lat (msec) : 10=0.04%, 20=5.76%, 50=94.20% 00:36:11.417 cpu : usr=99.05%, sys=0.67%, ctx=12, majf=0, minf=37 00:36:11.417 IO depths : 1=5.0%, 2=10.7%, 4=23.3%, 8=53.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:11.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=1329266: Wed Oct 30 14:23:08 2024 00:36:11.417 read: IOPS=665, BW=2663KiB/s (2726kB/s)(26.0MiB/10010msec) 00:36:11.417 slat (usec): min=5, max=100, avg=20.97, stdev=13.75 00:36:11.417 clat (usec): min=10947, max=35909, avg=23847.83, stdev=1064.52 00:36:11.417 lat (usec): min=10953, max=35930, avg=23868.80, stdev=1064.30 00:36:11.417 clat percentiles (usec): 00:36:11.417 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:11.417 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.417 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.417 | 99.00th=[25560], 99.50th=[25822], 99.90th=[35914], 99.95th=[35914], 00:36:11.417 | 99.99th=[35914] 00:36:11.417 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2660.42, stdev=53.31, samples=19 00:36:11.417 iops : min= 640, max= 672, avg=665.05, stdev=13.31, samples=19 00:36:11.417 lat (msec) : 20=0.50%, 50=99.50% 00:36:11.417 cpu : usr=98.60%, sys=0.94%, ctx=61, majf=0, minf=42 00:36:11.417 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.417 issued rwts: total=6663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=1329267: Wed Oct 30 14:23:08 2024 00:36:11.417 read: IOPS=672, BW=2692KiB/s (2756kB/s)(26.3MiB/10004msec) 00:36:11.417 slat (nsec): min=5590, max=96075, avg=23971.05, stdev=15368.87 00:36:11.417 clat (usec): min=5262, max=43544, avg=23546.99, stdev=2122.17 00:36:11.417 lat (usec): min=5280, max=43550, avg=23570.96, stdev=2122.96 00:36:11.417 clat percentiles (usec): 00:36:11.417 | 1.00th=[12518], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:11.417 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:11.417 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.417 | 99.00th=[25560], 99.50th=[25822], 99.90th=[40633], 99.95th=[40633], 00:36:11.417 | 99.99th=[43779] 00:36:11.417 bw ( KiB/s): min= 2554, max= 3072, 
per=4.16%, avg=2692.42, stdev=121.55, samples=19 00:36:11.417 iops : min= 638, max= 768, avg=673.05, stdev=30.45, samples=19 00:36:11.417 lat (msec) : 10=0.71%, 20=2.35%, 50=96.94% 00:36:11.418 cpu : usr=98.74%, sys=0.89%, ctx=136, majf=0, minf=59 00:36:11.418 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:11.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 issued rwts: total=6732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.418 filename0: (groupid=0, jobs=1): err= 0: pid=1329268: Wed Oct 30 14:23:08 2024 00:36:11.418 read: IOPS=705, BW=2821KiB/s (2889kB/s)(27.6MiB/10015msec) 00:36:11.418 slat (nsec): min=5488, max=79018, avg=14019.02, stdev=10563.70 00:36:11.418 clat (usec): min=1103, max=36283, avg=22573.62, stdev=4050.04 00:36:11.418 lat (usec): min=1124, max=36291, avg=22587.64, stdev=4050.84 00:36:11.418 clat percentiles (usec): 00:36:11.418 | 1.00th=[ 4146], 5.00th=[14877], 10.00th=[16909], 20.00th=[23200], 00:36:11.418 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:11.418 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.418 | 99.00th=[31851], 99.50th=[33424], 99.90th=[35914], 99.95th=[36439], 00:36:11.418 | 99.99th=[36439] 00:36:11.418 bw ( KiB/s): min= 2560, max= 4552, per=4.36%, avg=2818.15, stdev=432.20, samples=20 00:36:11.418 iops : min= 640, max= 1138, avg=704.50, stdev=108.04, samples=20 00:36:11.418 lat (msec) : 2=0.64%, 4=0.17%, 10=1.25%, 20=13.38%, 50=84.57% 00:36:11.418 cpu : usr=98.89%, sys=0.79%, ctx=18, majf=0, minf=61 00:36:11.418 IO depths : 1=4.6%, 2=9.4%, 4=20.8%, 8=57.0%, 16=8.1%, 32=0.0%, >=64=0.0% 00:36:11.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 issued rwts: total=7063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.418 filename1: (groupid=0, jobs=1): err= 0: pid=1329269: Wed Oct 30 14:23:08 2024 00:36:11.418 read: IOPS=668, BW=2672KiB/s (2736kB/s)(26.1MiB/10011msec) 00:36:11.418 slat (usec): min=5, max=100, avg=19.25, stdev=14.02 00:36:11.418 clat (usec): min=7485, max=32850, avg=23770.39, stdev=1363.80 00:36:11.418 lat (usec): min=7493, max=32859, avg=23789.64, stdev=1363.36 00:36:11.418 clat percentiles (usec): 00:36:11.418 | 1.00th=[16712], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:11.418 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.418 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:11.418 | 99.00th=[25560], 99.50th=[25560], 99.90th=[29754], 99.95th=[32637], 00:36:11.418 | 99.99th=[32900] 00:36:11.418 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2674.21, stdev=58.67, samples=19 00:36:11.418 iops : min= 640, max= 704, avg=668.53, stdev=14.66, samples=19 00:36:11.418 lat (msec) : 10=0.27%, 20=1.02%, 50=98.71% 00:36:11.418 cpu : usr=98.77%, sys=0.84%, ctx=63, majf=0, minf=40 00:36:11.418 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 issued rwts: total=6688,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:11.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.418 filename1: (groupid=0, jobs=1): err= 0: pid=1329270: Wed Oct 30 14:23:08 2024 00:36:11.418 read: IOPS=663, BW=2655KiB/s (2719kB/s)(25.9MiB/10004msec) 00:36:11.418 slat (usec): min=5, max=110, avg=25.55, stdev=17.03 00:36:11.418 clat (usec): min=15173, max=33945, avg=23854.37, stdev=1109.57 00:36:11.418 lat (usec): min=15183, max=33952, avg=23879.92, stdev=1109.93 00:36:11.418 clat percentiles (usec): 00:36:11.418 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:11.418 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.418 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.418 | 99.00th=[27919], 99.50th=[30802], 99.90th=[33817], 99.95th=[33817], 00:36:11.418 | 99.99th=[33817] 00:36:11.418 bw ( KiB/s): min= 2554, max= 2693, per=4.10%, avg=2653.95, stdev=58.46, samples=19 00:36:11.418 iops : min= 638, max= 673, avg=663.42, stdev=14.64, samples=19 00:36:11.418 lat (msec) : 20=0.72%, 50=99.28% 00:36:11.418 cpu : usr=98.87%, sys=0.82%, ctx=16, majf=0, minf=46 00:36:11.418 IO depths : 1=6.1%, 2=12.1%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:11.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.418 filename1: (groupid=0, jobs=1): err= 0: pid=1329271: Wed Oct 30 14:23:08 2024 00:36:11.418 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.1MiB/10044msec) 00:36:11.418 slat (nsec): min=5521, max=95951, avg=28901.81, stdev=17164.86 00:36:11.418 clat (usec): min=8276, max=49892, avg=23759.17, stdev=1814.67 00:36:11.418 lat (usec): min=8282, max=49909, avg=23788.07, stdev=1814.37 00:36:11.418 clat percentiles (usec): 00:36:11.418 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:11.418 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:11.418 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.418 | 99.00th=[25297], 99.50th=[25560], 99.90th=[49021], 99.95th=[49021], 00:36:11.418 | 99.99th=[50070] 00:36:11.418 bw ( KiB/s): min= 2432, max= 2896, per=4.12%, avg=2666.10, stdev=88.83, samples=20 00:36:11.418 iops : min= 608, max= 724, avg=666.50, stdev=22.24, samples=20 00:36:11.418 lat (msec) : 10=0.39%, 20=0.60%, 50=99.01% 00:36:11.418 cpu : usr=98.93%, sys=0.68%, ctx=96, majf=0, minf=39 00:36:11.418 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.418 filename1: (groupid=0, jobs=1): err= 0: pid=1329272: Wed Oct 30 14:23:08 2024 00:36:11.418 read: IOPS=686, BW=2745KiB/s (2811kB/s)(26.9MiB/10015msec) 00:36:11.418 slat (usec): min=5, max=103, avg=18.46, stdev=14.43 00:36:11.418 clat (usec): min=8507, max=40546, avg=23154.18, stdev=3045.63 00:36:11.418 lat (usec): min=8542, max=40558, avg=23172.64, stdev=3047.43 00:36:11.418 clat percentiles (usec): 00:36:11.418 | 1.00th=[13435], 5.00th=[15926], 10.00th=[18482], 20.00th=[23462], 00:36:11.418 | 
30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:11.418 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:11.418 | 99.00th=[31589], 99.50th=[34341], 99.90th=[40109], 99.95th=[40633], 00:36:11.418 | 99.99th=[40633] 00:36:11.418 bw ( KiB/s): min= 2560, max= 3200, per=4.24%, avg=2743.70, stdev=167.40, samples=20 00:36:11.418 iops : min= 640, max= 800, avg=685.90, stdev=41.86, samples=20 00:36:11.418 lat (msec) : 10=0.23%, 20=11.38%, 50=88.39% 00:36:11.418 cpu : usr=98.85%, sys=0.78%, ctx=64, majf=0, minf=40 00:36:11.418 IO depths : 1=2.5%, 2=7.8%, 4=22.0%, 8=57.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:36:11.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 issued rwts: total=6874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.418 filename1: (groupid=0, jobs=1): err= 0: pid=1329273: Wed Oct 30 14:23:08 2024 00:36:11.418 read: IOPS=709, BW=2838KiB/s (2906kB/s)(27.7MiB/10009msec) 00:36:11.418 slat (nsec): min=5580, max=98902, avg=8596.81, stdev=6232.09 00:36:11.418 clat (usec): min=5249, max=33411, avg=22473.80, stdev=3285.24 00:36:11.418 lat (usec): min=5270, max=33420, avg=22482.39, stdev=3285.11 00:36:11.418 clat percentiles (usec): 00:36:11.418 | 1.00th=[11994], 5.00th=[15795], 10.00th=[16188], 20.00th=[23200], 00:36:11.418 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:11.418 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.418 | 99.00th=[25560], 99.50th=[25560], 99.90th=[27657], 99.95th=[32637], 00:36:11.418 | 99.99th=[33424] 00:36:11.418 bw ( KiB/s): min= 2554, max= 3880, per=4.31%, avg=2786.63, stdev=310.15, samples=19 00:36:11.418 iops : min= 638, max= 970, avg=696.63, stdev=77.56, samples=19 00:36:11.418 lat (msec) : 10=0.68%, 20=17.16%, 50=82.16% 00:36:11.418 cpu : usr=98.99%, sys=0.71%, ctx=26, majf=0, minf=46 00:36:11.418 IO depths : 1=5.2%, 2=10.4%, 4=21.9%, 8=55.3%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:11.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 complete : 0=0.0%, 4=93.2%, 8=1.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 issued rwts: total=7102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.418 filename1: (groupid=0, jobs=1): err= 0: pid=1329274: Wed Oct 30 14:23:08 2024 00:36:11.418 read: IOPS=680, BW=2721KiB/s (2786kB/s)(26.6MiB/10002msec) 00:36:11.418 slat (nsec): min=5597, max=78589, avg=9071.34, stdev=6389.79 00:36:11.418 clat (usec): min=9263, max=39651, avg=23444.91, stdev=2794.94 00:36:11.418 lat (usec): min=9269, max=39725, avg=23453.98, stdev=2795.05 00:36:11.418 clat percentiles (usec): 00:36:11.418 | 1.00th=[11338], 5.00th=[16909], 10.00th=[22938], 20.00th=[23462], 00:36:11.418 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.418 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:11.418 | 99.00th=[31589], 99.50th=[31851], 99.90th=[35390], 99.95th=[39584], 00:36:11.418 | 99.99th=[39584] 00:36:11.418 bw ( KiB/s): min= 2560, max= 2922, per=4.21%, avg=2723.05, stdev=101.87, samples=19 00:36:11.418 iops : min= 640, max= 730, avg=680.74, stdev=25.41, samples=19 00:36:11.418 lat (msec) : 10=0.21%, 20=8.17%, 50=91.62% 00:36:11.418 cpu : usr=98.63%, sys=0.95%, ctx=147, majf=0, minf=60 00:36:11.418 IO depths 
: 1=4.8%, 2=10.2%, 4=22.3%, 8=54.8%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:11.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.418 issued rwts: total=6804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.418 filename1: (groupid=0, jobs=1): err= 0: pid=1329275: Wed Oct 30 14:23:08 2024 00:36:11.418 read: IOPS=666, BW=2668KiB/s (2732kB/s)(26.1MiB/10003msec) 00:36:11.418 slat (nsec): min=5589, max=95799, avg=27827.05, stdev=16591.30 00:36:11.418 clat (usec): min=7435, max=44170, avg=23732.28, stdev=1679.89 00:36:11.418 lat (usec): min=7442, max=44187, avg=23760.11, stdev=1680.13 00:36:11.418 clat percentiles (usec): 00:36:11.418 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:11.418 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.418 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.418 | 99.00th=[25035], 99.50th=[25297], 99.90th=[44303], 99.95th=[44303], 00:36:11.418 | 99.99th=[44303] 00:36:11.418 bw ( KiB/s): min= 2436, max= 2688, per=4.11%, avg=2654.21, stdev=71.70, samples=19 00:36:11.418 iops : min= 609, max= 672, avg=663.53, stdev=17.96, samples=19 00:36:11.418 lat (msec) : 10=0.48%, 20=0.48%, 50=99.04% 00:36:11.418 cpu : usr=98.90%, sys=0.72%, ctx=66, majf=0, minf=35 00:36:11.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:11.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.419 filename1: (groupid=0, jobs=1): err= 0: pid=1329276: Wed Oct 30 14:23:08 2024 00:36:11.419 read: IOPS=677, BW=2709KiB/s (2774kB/s)(26.5MiB/10003msec) 00:36:11.419 slat (nsec): min=5590, max=85289, avg=12801.82, stdev=10767.28 00:36:11.419 clat (usec): min=13213, max=39332, avg=23550.78, stdev=3127.83 00:36:11.419 lat (usec): min=13229, max=39378, avg=23563.58, stdev=3129.41 00:36:11.419 clat percentiles (usec): 00:36:11.419 | 1.00th=[14746], 5.00th=[17171], 10.00th=[19268], 20.00th=[23200], 00:36:11.419 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.419 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[28705], 00:36:11.419 | 99.00th=[34341], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 00:36:11.419 | 99.99th=[39584] 00:36:11.419 bw ( KiB/s): min= 2608, max= 2960, per=4.19%, avg=2707.89, stdev=81.41, samples=19 00:36:11.419 iops : min= 652, max= 740, avg=676.95, stdev=20.31, samples=19 00:36:11.419 lat (msec) : 20=12.22%, 50=87.78% 00:36:11.419 cpu : usr=98.77%, sys=0.93%, ctx=19, majf=0, minf=35 00:36:11.419 IO depths : 1=1.4%, 2=3.0%, 4=8.9%, 8=73.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:36:11.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 complete : 0=0.0%, 4=90.3%, 8=5.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 issued rwts: total=6774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.419 filename2: (groupid=0, jobs=1): err= 0: pid=1329277: Wed Oct 30 14:23:08 2024 00:36:11.419 read: IOPS=666, BW=2668KiB/s (2732kB/s)(26.1MiB/10004msec) 00:36:11.419 slat (nsec): min=5512, max=88494, 
avg=21713.82, stdev=14806.48 00:36:11.419 clat (usec): min=7451, max=44458, avg=23803.03, stdev=1719.64 00:36:11.419 lat (usec): min=7457, max=44479, avg=23824.75, stdev=1719.01 00:36:11.419 clat percentiles (usec): 00:36:11.419 | 1.00th=[17957], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:11.419 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.419 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.419 | 99.00th=[25297], 99.50th=[25560], 99.90th=[44303], 99.95th=[44303], 00:36:11.419 | 99.99th=[44303] 00:36:11.419 bw ( KiB/s): min= 2432, max= 2688, per=4.11%, avg=2654.00, stdev=72.38, samples=19 00:36:11.419 iops : min= 608, max= 672, avg=663.47, stdev=18.13, samples=19 00:36:11.419 lat (msec) : 10=0.48%, 20=0.60%, 50=98.92% 00:36:11.419 cpu : usr=97.46%, sys=1.53%, ctx=856, majf=0, minf=36 00:36:11.419 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.419 filename2: (groupid=0, jobs=1): err= 0: pid=1329278: Wed Oct 30 14:23:08 2024 00:36:11.419 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.3MiB/10010msec) 00:36:11.419 slat (nsec): min=5601, max=85263, avg=15240.10, stdev=13788.63 00:36:11.419 clat (usec): min=13410, max=38434, avg=23697.15, stdev=2049.55 00:36:11.419 lat (usec): min=13421, max=38440, avg=23712.39, stdev=2049.86 00:36:11.419 clat percentiles (usec): 00:36:11.419 | 1.00th=[16319], 5.00th=[20055], 10.00th=[23200], 20.00th=[23462], 00:36:11.419 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.419 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.419 | 99.00th=[32637], 99.50th=[33817], 99.90th=[38011], 99.95th=[38536], 00:36:11.419 | 99.99th=[38536] 00:36:11.419 bw ( KiB/s): min= 2560, max= 2832, per=4.15%, avg=2681.79, stdev=54.16, samples=19 00:36:11.419 iops : min= 640, max= 708, avg=670.42, stdev=13.54, samples=19 00:36:11.419 lat (msec) : 20=4.71%, 50=95.29% 00:36:11.419 cpu : usr=98.47%, sys=1.08%, ctx=143, majf=0, minf=38 00:36:11.419 IO depths : 1=5.4%, 2=11.0%, 4=22.9%, 8=53.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:36:11.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 issued rwts: total=6724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.419 filename2: (groupid=0, jobs=1): err= 0: pid=1329279: Wed Oct 30 14:23:08 2024 00:36:11.419 read: IOPS=694, BW=2780KiB/s (2846kB/s)(27.2MiB/10004msec) 00:36:11.419 slat (nsec): min=5439, max=81471, avg=17224.15, stdev=14008.20 00:36:11.419 clat (usec): min=6444, max=44071, avg=22921.64, stdev=4419.12 00:36:11.419 lat (usec): min=6449, max=44089, avg=22938.87, stdev=4420.55 00:36:11.419 clat percentiles (usec): 00:36:11.419 | 1.00th=[11076], 5.00th=[14877], 10.00th=[16909], 20.00th=[19792], 00:36:11.419 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:11.419 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25297], 95.00th=[30016], 00:36:11.419 | 99.00th=[37487], 99.50th=[39060], 99.90th=[43779], 99.95th=[44303], 00:36:11.419 | 99.99th=[44303] 00:36:11.419 
bw ( KiB/s): min= 2501, max= 2912, per=4.26%, avg=2751.95, stdev=91.06, samples=19 00:36:11.419 iops : min= 625, max= 728, avg=687.95, stdev=22.81, samples=19 00:36:11.419 lat (msec) : 10=0.60%, 20=19.86%, 50=79.53% 00:36:11.419 cpu : usr=98.77%, sys=0.94%, ctx=21, majf=0, minf=42 00:36:11.419 IO depths : 1=0.5%, 2=2.5%, 4=9.9%, 8=72.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:36:11.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 complete : 0=0.0%, 4=90.8%, 8=5.7%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 issued rwts: total=6952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.419 filename2: (groupid=0, jobs=1): err= 0: pid=1329280: Wed Oct 30 14:23:08 2024 00:36:11.419 read: IOPS=677, BW=2710KiB/s (2775kB/s)(26.5MiB/10018msec) 00:36:11.419 slat (nsec): min=5467, max=96904, avg=16894.14, stdev=14152.04 00:36:11.419 clat (usec): min=8432, max=37616, avg=23474.05, stdev=2295.59 00:36:11.419 lat (usec): min=8453, max=37622, avg=23490.94, stdev=2295.81 00:36:11.419 clat percentiles (usec): 00:36:11.419 | 1.00th=[13829], 5.00th=[17171], 10.00th=[23200], 20.00th=[23462], 00:36:11.419 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.419 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:11.419 | 99.00th=[27919], 99.50th=[32113], 99.90th=[37487], 99.95th=[37487], 00:36:11.419 | 99.99th=[37487] 00:36:11.419 bw ( KiB/s): min= 2560, max= 3104, per=4.19%, avg=2708.50, stdev=110.08, samples=20 00:36:11.419 iops : min= 640, max= 776, avg=677.10, stdev=27.53, samples=20 00:36:11.419 lat (msec) : 10=0.24%, 20=5.84%, 50=93.93% 00:36:11.419 cpu : usr=98.92%, sys=0.77%, ctx=22, majf=0, minf=53 00:36:11.419 IO depths : 1=5.5%, 2=11.3%, 4=23.8%, 8=52.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:11.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 issued rwts: total=6786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.419 filename2: (groupid=0, jobs=1): err= 0: pid=1329281: Wed Oct 30 14:23:08 2024 00:36:11.419 read: IOPS=671, BW=2688KiB/s (2753kB/s)(26.3MiB/10003msec) 00:36:11.419 slat (usec): min=5, max=110, avg=23.60, stdev=18.05 00:36:11.419 clat (usec): min=4306, max=43866, avg=23602.61, stdev=2743.49 00:36:11.419 lat (usec): min=4312, max=43884, avg=23626.20, stdev=2745.02 00:36:11.419 clat percentiles (usec): 00:36:11.419 | 1.00th=[14353], 5.00th=[19268], 10.00th=[23200], 20.00th=[23462], 00:36:11.419 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.419 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:11.419 | 99.00th=[32900], 99.50th=[35914], 99.90th=[43779], 99.95th=[43779], 00:36:11.419 | 99.99th=[43779] 00:36:11.419 bw ( KiB/s): min= 2436, max= 2976, per=4.14%, avg=2675.26, stdev=121.10, samples=19 00:36:11.419 iops : min= 609, max= 744, avg=668.79, stdev=30.30, samples=19 00:36:11.419 lat (msec) : 10=0.57%, 20=5.18%, 50=94.26% 00:36:11.419 cpu : usr=98.87%, sys=0.77%, ctx=53, majf=0, minf=56 00:36:11.419 IO depths : 1=3.3%, 2=7.8%, 4=18.5%, 8=60.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:36:11.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 complete : 0=0.0%, 4=92.8%, 8=2.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 issued rwts: total=6722,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:36:11.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.419 filename2: (groupid=0, jobs=1): err= 0: pid=1329282: Wed Oct 30 14:23:08 2024 00:36:11.419 read: IOPS=664, BW=2659KiB/s (2722kB/s)(26.0MiB/10014msec) 00:36:11.419 slat (nsec): min=5604, max=78199, avg=16647.89, stdev=12838.74 00:36:11.419 clat (usec): min=12533, max=42897, avg=23926.09, stdev=1070.83 00:36:11.419 lat (usec): min=12564, max=42919, avg=23942.73, stdev=1070.60 00:36:11.419 clat percentiles (usec): 00:36:11.419 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:11.419 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.419 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.419 | 99.00th=[25822], 99.50th=[30540], 99.90th=[32637], 99.95th=[33162], 00:36:11.419 | 99.99th=[42730] 00:36:11.419 bw ( KiB/s): min= 2544, max= 2704, per=4.11%, avg=2658.50, stdev=51.70, samples=20 00:36:11.419 iops : min= 636, max= 676, avg=664.60, stdev=12.91, samples=20 00:36:11.419 lat (msec) : 20=0.65%, 50=99.35% 00:36:11.419 cpu : usr=99.18%, sys=0.55%, ctx=14, majf=0, minf=65 00:36:11.419 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:36:11.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.419 filename2: (groupid=0, jobs=1): err= 0: pid=1329283: Wed Oct 30 14:23:08 2024 00:36:11.419 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10005msec) 00:36:11.419 slat (nsec): min=5416, max=91215, avg=26290.11, stdev=14180.22 00:36:11.419 clat (usec): min=5997, max=44935, avg=23751.32, stdev=1690.25 00:36:11.419 lat (usec): min=6003, max=44952, avg=23777.61, stdev=1690.43 00:36:11.419 clat percentiles (usec): 00:36:11.419 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:11.419 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.419 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:11.419 | 99.00th=[25035], 99.50th=[25560], 99.90th=[44827], 99.95th=[44827], 00:36:11.419 | 99.99th=[44827] 00:36:11.419 bw ( KiB/s): min= 2432, max= 2688, per=4.11%, avg=2654.00, stdev=72.38, samples=19 00:36:11.419 iops : min= 608, max= 672, avg=663.47, stdev=18.13, samples=19 00:36:11.419 lat (msec) : 10=0.45%, 20=0.54%, 50=99.01% 00:36:11.419 cpu : usr=99.11%, sys=0.61%, ctx=16, majf=0, minf=42 00:36:11.419 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.419 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.419 filename2: (groupid=0, jobs=1): err= 0: pid=1329284: Wed Oct 30 14:23:08 2024 00:36:11.420 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10004msec) 00:36:11.420 slat (nsec): min=5522, max=98151, avg=16886.58, stdev=14606.82 00:36:11.420 clat (usec): min=4665, max=49159, avg=23757.68, stdev=2511.76 00:36:11.420 lat (usec): min=4673, max=49180, avg=23774.56, stdev=2511.72 00:36:11.420 clat percentiles (usec): 00:36:11.420 | 1.00th=[15139], 5.00th=[20055], 10.00th=[23200], 
20.00th=[23462], 00:36:11.420 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:11.420 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:36:11.420 | 99.00th=[32113], 99.50th=[35390], 99.90th=[43779], 99.95th=[43779], 00:36:11.420 | 99.99th=[49021] 00:36:11.420 bw ( KiB/s): min= 2436, max= 2912, per=4.13%, avg=2667.68, stdev=87.15, samples=19 00:36:11.420 iops : min= 609, max= 728, avg=666.89, stdev=21.77, samples=19 00:36:11.420 lat (msec) : 10=0.54%, 20=4.40%, 50=95.06% 00:36:11.420 cpu : usr=98.93%, sys=0.76%, ctx=34, majf=0, minf=38 00:36:11.420 IO depths : 1=2.0%, 2=4.2%, 4=9.5%, 8=70.4%, 16=14.0%, 32=0.0%, >=64=0.0% 00:36:11.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.420 complete : 0=0.0%, 4=90.9%, 8=6.7%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.420 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.420 00:36:11.420 Run status group 0 (all jobs): 00:36:11.420 READ: bw=63.1MiB/s (66.2MB/s), 2655KiB/s-2838KiB/s (2719kB/s-2906kB/s), io=634MiB (665MB), run=10002-10044msec 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 bdev_null0 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 [2024-10-30 14:23:08.395242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 bdev_null1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:11.420 14:23:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:11.420 { 00:36:11.420 "params": { 00:36:11.420 "name": "Nvme$subsystem", 00:36:11.420 "trtype": "$TEST_TRANSPORT", 00:36:11.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.420 "adrfam": "ipv4", 00:36:11.420 "trsvcid": "$NVMF_PORT", 00:36:11.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.420 "hdgst": ${hdgst:-false}, 00:36:11.420 "ddgst": ${ddgst:-false} 00:36:11.420 }, 00:36:11.420 "method": "bdev_nvme_attach_controller" 00:36:11.420 } 00:36:11.420 EOF 00:36:11.420 )") 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:11.420 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:11.421 { 00:36:11.421 "params": { 00:36:11.421 "name": "Nvme$subsystem", 00:36:11.421 "trtype": "$TEST_TRANSPORT", 00:36:11.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.421 "adrfam": "ipv4", 00:36:11.421 "trsvcid": "$NVMF_PORT", 00:36:11.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.421 "hdgst": ${hdgst:-false}, 00:36:11.421 "ddgst": ${ddgst:-false} 00:36:11.421 }, 00:36:11.421 "method": "bdev_nvme_attach_controller" 00:36:11.421 } 00:36:11.421 EOF 00:36:11.421 )") 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:11.421 "params": { 00:36:11.421 "name": "Nvme0", 00:36:11.421 "trtype": "tcp", 00:36:11.421 "traddr": "10.0.0.2", 00:36:11.421 "adrfam": "ipv4", 00:36:11.421 "trsvcid": "4420", 00:36:11.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:11.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:11.421 "hdgst": false, 00:36:11.421 "ddgst": false 00:36:11.421 }, 00:36:11.421 "method": "bdev_nvme_attach_controller" 00:36:11.421 },{ 00:36:11.421 "params": { 00:36:11.421 "name": "Nvme1", 00:36:11.421 "trtype": "tcp", 00:36:11.421 "traddr": "10.0.0.2", 00:36:11.421 "adrfam": "ipv4", 00:36:11.421 "trsvcid": "4420", 00:36:11.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:11.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:11.421 "hdgst": false, 00:36:11.421 "ddgst": false 00:36:11.421 }, 00:36:11.421 "method": "bdev_nvme_attach_controller" 00:36:11.421 }' 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:11.421 14:23:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.421 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:11.421 ... 00:36:11.421 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:11.421 ... 
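For reference, the resolved command above runs stock fio with SPDK's bdev ioengine preloaded, fed by the JSON config just printed. A minimal standalone sketch, assuming that config is saved as bdev.json and that dif.fio approximates the job file produced by gen_fio_conf (its contents are not captured in this log):

  # Hedged sketch, not the harness's exact invocation; bdev.json and dif.fio are assumed files.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio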
00:36:11.421 fio-3.35 00:36:11.421 Starting 4 threads 00:36:16.710 00:36:16.710 filename0: (groupid=0, jobs=1): err= 0: pid=1331637: Wed Oct 30 14:23:14 2024 00:36:16.710 read: IOPS=2518, BW=19.7MiB/s (20.6MB/s)(98.4MiB/5002msec) 00:36:16.710 slat (nsec): min=5411, max=76829, avg=7225.23, stdev=2562.01 00:36:16.710 clat (usec): min=1526, max=5505, avg=3155.92, stdev=449.44 00:36:16.710 lat (usec): min=1532, max=5514, avg=3163.14, stdev=449.50 00:36:16.711 clat percentiles (usec): 00:36:16.711 | 1.00th=[ 2180], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2671], 00:36:16.711 | 30.00th=[ 2704], 40.00th=[ 3195], 50.00th=[ 3425], 60.00th=[ 3458], 00:36:16.711 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3523], 95.00th=[ 3556], 00:36:16.711 | 99.00th=[ 4080], 99.50th=[ 4621], 99.90th=[ 5211], 99.95th=[ 5276], 00:36:16.711 | 99.99th=[ 5473] 00:36:16.711 bw ( KiB/s): min=18320, max=23808, per=25.02%, avg=20155.20, stdev=2389.19, samples=10 00:36:16.711 iops : min= 2290, max= 2976, avg=2519.40, stdev=298.65, samples=10 00:36:16.711 lat (msec) : 2=0.35%, 4=98.40%, 10=1.25% 00:36:16.711 cpu : usr=96.30%, sys=3.44%, ctx=6, majf=0, minf=67 00:36:16.711 IO depths : 1=0.1%, 2=0.1%, 4=73.2%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.711 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.711 issued rwts: total=12600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.711 filename0: (groupid=0, jobs=1): err= 0: pid=1331638: Wed Oct 30 14:23:14 2024 00:36:16.711 read: IOPS=2515, BW=19.6MiB/s (20.6MB/s)(98.3MiB/5002msec) 00:36:16.711 slat (nsec): min=5420, max=82398, avg=7530.99, stdev=3031.83 00:36:16.711 clat (usec): min=1523, max=5411, avg=3160.85, stdev=442.03 00:36:16.711 lat (usec): min=1532, max=5419, avg=3168.38, stdev=442.21 00:36:16.711 clat percentiles (usec): 00:36:16.711 | 1.00th=[ 2180], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2671], 00:36:16.711 | 30.00th=[ 2704], 40.00th=[ 3228], 50.00th=[ 3425], 60.00th=[ 3458], 00:36:16.711 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3490], 95.00th=[ 3589], 00:36:16.711 | 99.00th=[ 4015], 99.50th=[ 4359], 99.90th=[ 5014], 99.95th=[ 5145], 00:36:16.711 | 99.99th=[ 5407] 00:36:16.711 bw ( KiB/s): min=18240, max=23824, per=24.97%, avg=20116.80, stdev=2396.45, samples=10 00:36:16.711 iops : min= 2280, max= 2978, avg=2514.60, stdev=299.56, samples=10 00:36:16.711 lat (msec) : 2=0.37%, 4=98.61%, 10=1.03% 00:36:16.711 cpu : usr=96.40%, sys=3.34%, ctx=6, majf=0, minf=66 00:36:16.711 IO depths : 1=0.1%, 2=0.1%, 4=71.7%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.711 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.711 issued rwts: total=12581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.711 filename1: (groupid=0, jobs=1): err= 0: pid=1331639: Wed Oct 30 14:23:14 2024 00:36:16.711 read: IOPS=2530, BW=19.8MiB/s (20.7MB/s)(98.9MiB/5004msec) 00:36:16.711 slat (nsec): min=5416, max=57749, avg=7614.61, stdev=3138.43 00:36:16.711 clat (usec): min=1125, max=5327, avg=3142.35, stdev=442.45 00:36:16.711 lat (usec): min=1141, max=5332, avg=3149.96, stdev=442.57 00:36:16.711 clat percentiles (usec): 00:36:16.711 | 1.00th=[ 2089], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2671], 00:36:16.711 | 30.00th=[ 
2704], 40.00th=[ 3097], 50.00th=[ 3425], 60.00th=[ 3458], 00:36:16.711 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3523], 95.00th=[ 3556], 00:36:16.711 | 99.00th=[ 3916], 99.50th=[ 4293], 99.90th=[ 4817], 99.95th=[ 4948], 00:36:16.711 | 99.99th=[ 5342] 00:36:16.711 bw ( KiB/s): min=18336, max=23760, per=25.13%, avg=20249.50, stdev=2425.87, samples=10 00:36:16.711 iops : min= 2292, max= 2970, avg=2531.10, stdev=303.09, samples=10 00:36:16.711 lat (msec) : 2=0.77%, 4=98.33%, 10=0.89% 00:36:16.711 cpu : usr=95.96%, sys=3.76%, ctx=6, majf=0, minf=62 00:36:16.711 IO depths : 1=0.1%, 2=0.3%, 4=67.4%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.711 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.711 issued rwts: total=12664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.711 filename1: (groupid=0, jobs=1): err= 0: pid=1331640: Wed Oct 30 14:23:14 2024 00:36:16.711 read: IOPS=2509, BW=19.6MiB/s (20.6MB/s)(98.1MiB/5002msec) 00:36:16.711 slat (nsec): min=5411, max=39017, avg=7535.90, stdev=3140.04 00:36:16.711 clat (usec): min=1533, max=5814, avg=3168.57, stdev=448.09 00:36:16.711 lat (usec): min=1539, max=5843, avg=3176.11, stdev=448.38 00:36:16.711 clat percentiles (usec): 00:36:16.711 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2671], 00:36:16.711 | 30.00th=[ 2704], 40.00th=[ 3261], 50.00th=[ 3425], 60.00th=[ 3458], 00:36:16.711 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3523], 95.00th=[ 3589], 00:36:16.711 | 99.00th=[ 4080], 99.50th=[ 4686], 99.90th=[ 5211], 99.95th=[ 5735], 00:36:16.711 | 99.99th=[ 5800] 00:36:16.711 bw ( KiB/s): min=18240, max=23728, per=24.92%, avg=20075.00, stdev=2347.59, samples=10 00:36:16.711 iops : min= 2280, max= 2966, avg=2509.30, stdev=293.33, samples=10 00:36:16.711 lat (msec) : 2=0.22%, 4=98.38%, 10=1.40% 00:36:16.711 cpu : usr=96.12%, sys=3.60%, ctx=6, majf=0, minf=120 00:36:16.711 IO depths : 1=0.1%, 2=0.1%, 4=70.6%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.711 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.711 issued rwts: total=12552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.711 00:36:16.711 Run status group 0 (all jobs): 00:36:16.711 READ: bw=78.7MiB/s (82.5MB/s), 19.6MiB/s-19.8MiB/s (20.6MB/s-20.7MB/s), io=394MiB (413MB), run=5002-5004msec 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.711 14:23:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.711 00:36:16.711 real 0m24.619s 00:36:16.711 user 5m16.173s 00:36:16.711 sys 0m4.887s 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.711 14:23:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.711 ************************************ 00:36:16.711 END TEST fio_dif_rand_params 00:36:16.711 ************************************ 00:36:16.711 14:23:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:16.711 14:23:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:16.711 14:23:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:16.711 14:23:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.711 ************************************ 00:36:16.711 START TEST fio_dif_digest 00:36:16.711 ************************************ 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.711 bdev_null0 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:16.711 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.712 [2024-10-30 14:23:14.983609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:16.712 { 00:36:16.712 "params": { 00:36:16.712 "name": "Nvme$subsystem", 00:36:16.712 "trtype": "$TEST_TRANSPORT", 00:36:16.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:16.712 "adrfam": "ipv4", 00:36:16.712 "trsvcid": "$NVMF_PORT", 00:36:16.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:16.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:16.712 "hdgst": ${hdgst:-false}, 00:36:16.712 "ddgst": ${ddgst:-false} 00:36:16.712 }, 00:36:16.712 "method": "bdev_nvme_attach_controller" 00:36:16.712 } 00:36:16.712 EOF 00:36:16.712 
)") 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:16.712 14:23:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
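The DIF-enabled target used by this digest run was assembled with the RPCs traced above. A hedged sketch of the same sequence issued directly with scripts/rpc.py (the harness's rpc_cmd wrapper sends equivalent RPCs over its own socket); the bdev name, sizes, NQN and the 10.0.0.2:4420 listener all match the values shown in the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420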
00:36:16.712 14:23:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:16.712 14:23:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:16.712 "params": { 00:36:16.712 "name": "Nvme0", 00:36:16.712 "trtype": "tcp", 00:36:16.712 "traddr": "10.0.0.2", 00:36:16.712 "adrfam": "ipv4", 00:36:16.712 "trsvcid": "4420", 00:36:16.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:16.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:16.712 "hdgst": true, 00:36:16.712 "ddgst": true 00:36:16.712 }, 00:36:16.712 "method": "bdev_nvme_attach_controller" 00:36:16.712 }' 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:16.973 14:23:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.235 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:17.235 ... 
00:36:17.235 fio-3.35 00:36:17.235 Starting 3 threads 00:36:29.646 00:36:29.646 filename0: (groupid=0, jobs=1): err= 0: pid=1332977: Wed Oct 30 14:23:25 2024 00:36:29.646 read: IOPS=340, BW=42.5MiB/s (44.6MB/s)(427MiB/10046msec) 00:36:29.646 slat (nsec): min=5911, max=40912, avg=8192.15, stdev=1829.68 00:36:29.646 clat (usec): min=5053, max=50374, avg=8791.88, stdev=1711.58 00:36:29.646 lat (usec): min=5060, max=50380, avg=8800.07, stdev=1711.98 00:36:29.646 clat percentiles (usec): 00:36:29.646 | 1.00th=[ 6128], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7373], 00:36:29.646 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9372], 00:36:29.646 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:36:29.646 | 99.00th=[11600], 99.50th=[11994], 99.90th=[12780], 99.95th=[47449], 00:36:29.646 | 99.99th=[50594] 00:36:29.646 bw ( KiB/s): min=38912, max=47360, per=39.61%, avg=43737.60, stdev=2703.23, samples=20 00:36:29.646 iops : min= 304, max= 370, avg=341.70, stdev=21.12, samples=20 00:36:29.646 lat (msec) : 10=76.89%, 20=23.05%, 50=0.03%, 100=0.03% 00:36:29.646 cpu : usr=93.90%, sys=5.84%, ctx=25, majf=0, minf=199 00:36:29.646 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.646 issued rwts: total=3419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.646 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.646 filename0: (groupid=0, jobs=1): err= 0: pid=1332978: Wed Oct 30 14:23:25 2024 00:36:29.646 read: IOPS=205, BW=25.6MiB/s (26.9MB/s)(257MiB/10035msec) 00:36:29.646 slat (nsec): min=5872, max=38800, avg=8568.19, stdev=2003.59 00:36:29.646 clat (usec): min=6317, max=93066, avg=14618.23, stdev=15023.04 00:36:29.646 lat (usec): min=6324, max=93074, avg=14626.80, stdev=15022.79 00:36:29.646 clat percentiles (usec): 00:36:29.646 | 1.00th=[ 7111], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8455], 00:36:29.646 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503], 00:36:29.646 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[49546], 95.00th=[50594], 00:36:29.646 | 99.00th=[52167], 99.50th=[90702], 99.90th=[91751], 99.95th=[92799], 00:36:29.646 | 99.99th=[92799] 00:36:29.646 bw ( KiB/s): min=15104, max=38912, per=23.82%, avg=26304.00, stdev=7854.03, samples=20 00:36:29.646 iops : min= 118, max= 304, avg=205.50, stdev=61.36, samples=20 00:36:29.646 lat (msec) : 10=71.77%, 20=15.69%, 50=5.20%, 100=7.34% 00:36:29.646 cpu : usr=96.33%, sys=3.44%, ctx=12, majf=0, minf=31 00:36:29.646 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.646 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.646 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.646 filename0: (groupid=0, jobs=1): err= 0: pid=1332979: Wed Oct 30 14:23:25 2024 00:36:29.646 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(399MiB/10043msec) 00:36:29.646 slat (nsec): min=5707, max=57426, avg=8662.06, stdev=2406.59 00:36:29.646 clat (usec): min=5784, max=90678, avg=9424.56, stdev=3437.74 00:36:29.646 lat (usec): min=5793, max=90687, avg=9433.22, stdev=3437.85 00:36:29.646 clat percentiles (usec): 00:36:29.646 | 1.00th=[ 6456], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 
7635], 00:36:29.646 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:36:29.646 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11207], 95.00th=[11731], 00:36:29.646 | 99.00th=[12518], 99.50th=[13566], 99.90th=[52691], 99.95th=[90702], 00:36:29.646 | 99.99th=[90702] 00:36:29.646 bw ( KiB/s): min=36352, max=45312, per=36.94%, avg=40793.60, stdev=2710.87, samples=20 00:36:29.646 iops : min= 284, max= 354, avg=318.70, stdev=21.18, samples=20 00:36:29.646 lat (msec) : 10=63.44%, 20=36.19%, 50=0.09%, 100=0.28% 00:36:29.646 cpu : usr=95.40%, sys=4.34%, ctx=18, majf=0, minf=181 00:36:29.646 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.646 issued rwts: total=3189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.646 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.646 00:36:29.646 Run status group 0 (all jobs): 00:36:29.646 READ: bw=108MiB/s (113MB/s), 25.6MiB/s-42.5MiB/s (26.9MB/s-44.6MB/s), io=1083MiB (1136MB), run=10035-10046msec 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.646 00:36:29.646 real 0m11.150s 00:36:29.646 user 0m44.626s 00:36:29.646 sys 0m1.710s 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:29.646 14:23:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.646 ************************************ 00:36:29.646 END TEST fio_dif_digest 00:36:29.646 ************************************ 00:36:29.646 14:23:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:29.646 14:23:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:29.646 14:23:26 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:29.646 14:23:26 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:29.646 14:23:26 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:29.646 14:23:26 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:29.646 14:23:26 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:29.646 14:23:26 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:29.646 rmmod nvme_tcp 00:36:29.646 rmmod nvme_fabrics 00:36:29.646 rmmod nvme_keyring 00:36:29.646 14:23:26 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:29.646 14:23:26 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:29.646 14:23:26 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:29.647 14:23:26 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1322637 ']' 00:36:29.647 14:23:26 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1322637 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1322637 ']' 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1322637 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1322637 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1322637' 00:36:29.647 killing process with pid 1322637 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1322637 00:36:29.647 14:23:26 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1322637 00:36:29.647 14:23:26 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:29.647 14:23:26 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:31.561 Waiting for block devices as requested 00:36:31.561 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:31.561 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:31.821 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:31.821 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:31.821 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:31.821 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:32.082 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:32.082 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:32.082 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:32.343 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:32.343 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:32.605 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:32.605 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:32.605 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:32.866 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:32.866 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:32.866 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:33.126 14:23:31 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:33.126 14:23:31 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:33.126 14:23:31 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:33.126 14:23:31 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:33.126 14:23:31 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:33.126 14:23:31 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:33.126 14:23:31 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:33.126 14:23:31 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:33.126 14:23:31 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.126 14:23:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:33.126 14:23:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.672 14:23:33 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
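For readers following the trace, the nvmf_dif teardown above reduces to roughly the following steps. This is a simplified sketch reconstructed from the xtrace output, not the literal dif.sh/common.sh code: rpc_cmd is the test helper that wraps the SPDK RPC client, "$nvmfpid" stands in for the nvmf_tgt pid of this run (1322637), the cvl_0_* interface and namespace names are specific to this test bed, and the ip netns delete line is an assumption about what _remove_spdk_ns amounts to here.
  # tear down the test subsystem and the null bdev backing it (SPDK RPC)
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_null_delete bdev_null0
  # unload the initiator-side kernel modules (rmmod output: nvme_tcp, nvme_fabrics, nvme_keyring)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt application started for this test
  kill "$nvmfpid" && wait "$nvmfpid"
  # rebind the NVMe/ioatdma devices back to their kernel drivers
  ./scripts/setup.sh reset
  # drop the SPDK-tagged iptables rules added during setup
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # remove the target-side network namespace and flush the initiator address
  ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1
These steps mirror, in reverse, the nvmftestinit sequence that the nvmf_abort_qd_sizes run repeats below (namespace creation, address assignment, iptables ACCEPT rule, ping check).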
00:36:35.672 00:36:35.672 real 1m18.162s 00:36:35.672 user 8m3.356s 00:36:35.672 sys 0m21.969s 00:36:35.672 14:23:33 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:35.672 14:23:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:35.672 ************************************ 00:36:35.672 END TEST nvmf_dif 00:36:35.672 ************************************ 00:36:35.672 14:23:33 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:35.672 14:23:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:35.673 14:23:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:35.673 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:36:35.673 ************************************ 00:36:35.673 START TEST nvmf_abort_qd_sizes 00:36:35.673 ************************************ 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:35.673 * Looking for test storage... 00:36:35.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:35.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.673 --rc genhtml_branch_coverage=1 00:36:35.673 --rc genhtml_function_coverage=1 00:36:35.673 --rc genhtml_legend=1 00:36:35.673 --rc geninfo_all_blocks=1 00:36:35.673 --rc geninfo_unexecuted_blocks=1 00:36:35.673 00:36:35.673 ' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:35.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.673 --rc genhtml_branch_coverage=1 00:36:35.673 --rc genhtml_function_coverage=1 00:36:35.673 --rc genhtml_legend=1 00:36:35.673 --rc geninfo_all_blocks=1 00:36:35.673 --rc geninfo_unexecuted_blocks=1 00:36:35.673 00:36:35.673 ' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:35.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.673 --rc genhtml_branch_coverage=1 00:36:35.673 --rc genhtml_function_coverage=1 00:36:35.673 --rc genhtml_legend=1 00:36:35.673 --rc geninfo_all_blocks=1 00:36:35.673 --rc geninfo_unexecuted_blocks=1 00:36:35.673 00:36:35.673 ' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:35.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.673 --rc genhtml_branch_coverage=1 00:36:35.673 --rc genhtml_function_coverage=1 00:36:35.673 --rc genhtml_legend=1 00:36:35.673 --rc geninfo_all_blocks=1 00:36:35.673 --rc geninfo_unexecuted_blocks=1 00:36:35.673 00:36:35.673 ' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:35.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:35.673 14:23:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:43.815 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:43.815 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:43.815 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:43.815 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:43.815 14:23:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:43.815 14:23:40 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:43.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:43.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:36:43.815 00:36:43.815 --- 10.0.0.2 ping statistics --- 00:36:43.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.815 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:43.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:43.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:36:43.815 00:36:43.815 --- 10.0.0.1 ping statistics --- 00:36:43.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.815 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:43.815 14:23:41 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:47.122 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:47.122 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1342423 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1342423 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1342423 ']' 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:47.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:47.122 14:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:47.383 [2024-10-30 14:23:45.433264] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:36:47.383 [2024-10-30 14:23:45.433329] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:47.383 [2024-10-30 14:23:45.530938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:47.383 [2024-10-30 14:23:45.585465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:47.383 [2024-10-30 14:23:45.585514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:47.383 [2024-10-30 14:23:45.585523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:47.383 [2024-10-30 14:23:45.585531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:47.383 [2024-10-30 14:23:45.585537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:47.383 [2024-10-30 14:23:45.587961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.383 [2024-10-30 14:23:45.588123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:47.383 [2024-10-30 14:23:45.588284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:47.383 [2024-10-30 14:23:45.588284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:48.330 
14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.330 14:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.330 ************************************ 00:36:48.330 START TEST spdk_target_abort 00:36:48.330 ************************************ 00:36:48.330 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:36:48.330 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:48.330 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:48.330 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.330 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.592 spdk_targetn1 00:36:48.592 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.593 [2024-10-30 14:23:46.674341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.593 [2024-10-30 14:23:46.723026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:48.593 14:23:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.855 [2024-10-30 14:23:46.967317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:191 nsid:1 lba:224 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:48.855 [2024-10-30 14:23:46.967368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:001e p:1 m:0 dnr:0 00:36:48.855 [2024-10-30 14:23:46.975395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:480 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:36:48.855 [2024-10-30 14:23:46.975426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003d p:1 m:0 dnr:0 00:36:48.855 [2024-10-30 14:23:47.006325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1408 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:48.855 [2024-10-30 14:23:47.006361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b1 p:1 m:0 dnr:0 00:36:48.855 [2024-10-30 14:23:47.021262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1896 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:48.855 [2024-10-30 14:23:47.021293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00ef p:1 m:0 dnr:0 00:36:48.855 [2024-10-30 14:23:47.034087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2216 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:48.855 [2024-10-30 14:23:47.034118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:48.855 [2024-10-30 14:23:47.044328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2576 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:48.855 [2024-10-30 14:23:47.044357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:48.855 [2024-10-30 14:23:47.068277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3312 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:48.855 [2024-10-30 14:23:47.068308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00a1 p:0 m:0 dnr:0 00:36:48.855 [2024-10-30 14:23:47.095354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:4024 len:8 PRP1 0x200004abe000 PRP2 0x0 00:36:48.855 [2024-10-30 14:23:47.095384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:36:52.159 Initializing NVMe Controllers 00:36:52.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:52.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:52.159 Initialization complete. Launching workers. 
00:36:52.159 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12166, failed: 8 00:36:52.159 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2925, failed to submit 9249 00:36:52.159 success 778, unsuccessful 2147, failed 0 00:36:52.159 14:23:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.159 14:23:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.159 [2024-10-30 14:23:50.287699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:2344 len:8 PRP1 0x200004e48000 PRP2 0x0 00:36:52.159 [2024-10-30 14:23:50.287754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:52.159 [2024-10-30 14:23:50.302899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2720 len:8 PRP1 0x200004e46000 PRP2 0x0 00:36:52.159 [2024-10-30 14:23:50.302922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:52.159 [2024-10-30 14:23:50.318894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:3056 len:8 PRP1 0x200004e46000 PRP2 0x0 00:36:52.159 [2024-10-30 14:23:50.318916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0087 p:0 m:0 dnr:0 00:36:55.462 Initializing NVMe Controllers 00:36:55.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:55.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:55.462 Initialization complete. Launching workers. 00:36:55.462 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8472, failed: 3 00:36:55.462 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 7229 00:36:55.462 success 303, unsuccessful 943, failed 0 00:36:55.462 14:23:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:55.462 14:23:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:58.765 Initializing NVMe Controllers 00:36:58.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:58.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:58.765 Initialization complete. Launching workers. 
00:36:58.765 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44986, failed: 0 00:36:58.765 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2547, failed to submit 42439 00:36:58.765 success 589, unsuccessful 1958, failed 0 00:36:58.765 14:23:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:58.765 14:23:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.765 14:23:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.765 14:23:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.765 14:23:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:58.765 14:23:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.765 14:23:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1342423 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1342423 ']' 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1342423 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1342423 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1342423' 00:37:00.150 killing process with pid 1342423 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1342423 00:37:00.150 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1342423 00:37:00.412 00:37:00.412 real 0m12.189s 00:37:00.412 user 0m49.628s 00:37:00.412 sys 0m2.057s 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:00.412 ************************************ 00:37:00.412 END TEST spdk_target_abort 00:37:00.412 ************************************ 00:37:00.412 14:23:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:00.412 14:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:00.412 14:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:00.412 14:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:00.412 ************************************ 00:37:00.412 START TEST kernel_target_abort 00:37:00.412 
************************************ 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:00.412 14:23:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:03.719 Waiting for block devices as requested 00:37:03.980 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:03.980 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:03.980 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:04.241 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:04.241 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:04.241 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:04.502 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:04.502 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:04.502 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:04.763 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:04.763 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:05.024 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:05.024 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:05.024 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:05.286 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:05.286 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:05.286 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:05.547 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:05.547 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:05.547 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:05.547 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:05.547 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:05.547 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:05.547 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:05.547 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:05.547 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:05.808 No valid GPT data, bailing 00:37:05.808 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:05.809 14:24:03 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:05.809 00:37:05.809 Discovery Log Number of Records 2, Generation counter 2 00:37:05.809 =====Discovery Log Entry 0====== 00:37:05.809 trtype: tcp 00:37:05.809 adrfam: ipv4 00:37:05.809 subtype: current discovery subsystem 00:37:05.809 treq: not specified, sq flow control disable supported 00:37:05.809 portid: 1 00:37:05.809 trsvcid: 4420 00:37:05.809 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:05.809 traddr: 10.0.0.1 00:37:05.809 eflags: none 00:37:05.809 sectype: none 00:37:05.809 =====Discovery Log Entry 1====== 00:37:05.809 trtype: tcp 00:37:05.809 adrfam: ipv4 00:37:05.809 subtype: nvme subsystem 00:37:05.809 treq: not specified, sq flow control disable supported 00:37:05.809 portid: 1 00:37:05.809 trsvcid: 4420 00:37:05.809 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:05.809 traddr: 10.0.0.1 00:37:05.809 eflags: none 00:37:05.809 sectype: none 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.809 14:24:03 
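The configure_kernel_target steps above build the kernel-mode NVMe/TCP target entirely through configfs. set -x does not show where each echo is redirected, so the sketch below fills in the standard nvmet attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) as an assumption; the NQN, backing device, address, and port are the values from the trace:

    # Kernel NVMe/TCP target built through configfs (values from the trace above;
    # the attribute file names after ">" are assumed, not shown by set -x).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet                                  # nvmet_tcp must also be loadable
    mkdir "$subsys" "$ns" "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$ns/device_path"
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"             # expose the subsystem on the port

    # The discovery in the trace should now list both the discovery subsystem
    # and nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420.
    nvme discover -t tcp -a 10.0.0.1 -s 4420

clean_kernel_target later in this run unwinds the same state: it removes the port-to-subsystem symlink, rmdirs the namespace, port, and subsystem directories, and then unloads nvmet_tcp and nvmet.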
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:05.809 14:24:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:09.112 Initializing NVMe Controllers 00:37:09.112 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:09.112 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:09.112 Initialization complete. Launching workers. 00:37:09.112 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67644, failed: 0 00:37:09.112 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67644, failed to submit 0 00:37:09.112 success 0, unsuccessful 67644, failed 0 00:37:09.112 14:24:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:09.112 14:24:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:12.415 Initializing NVMe Controllers 00:37:12.415 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:12.415 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:12.415 Initialization complete. Launching workers. 
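Each of these runs is one pass of the rabort loop traced above: the connection parameters are folded into a single SPDK transport-ID string and the abort example is re-run with the next queue depth from qds=(4 24 64). The flag meanings noted below follow the usual SPDK example conventions and are my reading, not stated in the log:

    # Re-run the abort example once per queue depth under test.
    qds=(4 24 64)
    trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    abort_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort

    for qd in "${qds[@]}"; do
        # -q queue depth, -w rw -M 50 mixed read/write, -o 4096-byte I/Os,
        # -r transport ID of the kernel target set up above
        "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
    done

The three per-run summaries in this section correspond to queue depths 4, 24, and 64, in that order.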
00:37:12.415 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 114223, failed: 0 00:37:12.415 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28750, failed to submit 85473 00:37:12.415 success 0, unsuccessful 28750, failed 0 00:37:12.415 14:24:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:12.415 14:24:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:15.710 Initializing NVMe Controllers 00:37:15.710 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:15.710 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:15.710 Initialization complete. Launching workers. 00:37:15.710 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146034, failed: 0 00:37:15.710 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36534, failed to submit 109500 00:37:15.710 success 0, unsuccessful 36534, failed 0 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:15.710 14:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:19.009 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:19.009 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:19.009 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:20.414 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:20.988 00:37:20.988 real 0m20.372s 00:37:20.988 user 0m9.929s 00:37:20.988 sys 0m6.073s 00:37:20.988 14:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:20.988 14:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:20.988 ************************************ 00:37:20.988 END TEST kernel_target_abort 00:37:20.988 ************************************ 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:20.988 rmmod nvme_tcp 00:37:20.988 rmmod nvme_fabrics 00:37:20.988 rmmod nvme_keyring 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1342423 ']' 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1342423 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1342423 ']' 00:37:20.988 14:24:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1342423 00:37:20.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1342423) - No such process 00:37:20.989 14:24:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1342423 is not found' 00:37:20.989 Process with pid 1342423 is not found 00:37:20.989 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:20.989 14:24:19 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:24.292 Waiting for block devices as requested 00:37:24.292 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:24.292 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:24.554 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:24.554 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:24.554 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:24.815 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:24.815 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:24.815 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:24.815 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:25.076 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:25.076 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:25.337 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:25.337 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:25.337 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:25.599 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:25.599 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:25.599 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:25.860 14:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:25.860 14:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:25.860 14:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:25.860 14:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:25.860 14:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:25.860 14:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:26.122 14:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:26.122 14:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:26.122 14:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.122 14:24:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:26.122 14:24:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.202 14:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:28.202 00:37:28.202 real 0m52.676s 00:37:28.202 user 1m5.120s 00:37:28.202 sys 0m19.317s 00:37:28.202 14:24:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:28.202 14:24:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:28.202 ************************************ 00:37:28.202 END TEST nvmf_abort_qd_sizes 00:37:28.202 ************************************ 00:37:28.202 14:24:26 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:28.202 14:24:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:28.202 14:24:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:28.202 14:24:26 -- common/autotest_common.sh@10 -- # set +x 00:37:28.202 ************************************ 00:37:28.202 START TEST keyring_file 00:37:28.202 ************************************ 00:37:28.202 14:24:26 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:28.202 * Looking for test storage... 
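The keyring_file suite begins by locating its test storage and probing the installed lcov; the cmp_versions trace just below is that probe (lt 1.15 2), splitting both version strings on '.', '-' and ':' and comparing them field by field as integers. A condensed reimplementation of the traced logic, under a hypothetical name version_lt:

    # Field-by-field "less than" for dotted version strings, following the
    # cmp_versions trace from scripts/common.sh (the real helper also validates
    # that every field is a plain integer).
    version_lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < max; i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            ((a < b)) && return 0      # first differing field decides
            ((a > b)) && return 1
        done
        return 1                       # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2: use the legacy LCOV_OPTS"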
00:37:28.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:28.202 14:24:26 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:28.202 14:24:26 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:37:28.202 14:24:26 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:28.463 14:24:26 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:28.463 14:24:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:28.464 14:24:26 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:28.464 14:24:26 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:28.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.464 --rc genhtml_branch_coverage=1 00:37:28.464 --rc genhtml_function_coverage=1 00:37:28.464 --rc genhtml_legend=1 00:37:28.464 --rc geninfo_all_blocks=1 00:37:28.464 --rc geninfo_unexecuted_blocks=1 00:37:28.464 00:37:28.464 ' 00:37:28.464 14:24:26 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:28.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.464 --rc genhtml_branch_coverage=1 00:37:28.464 --rc genhtml_function_coverage=1 00:37:28.464 --rc genhtml_legend=1 00:37:28.464 --rc geninfo_all_blocks=1 
00:37:28.464 --rc geninfo_unexecuted_blocks=1 00:37:28.464 00:37:28.464 ' 00:37:28.464 14:24:26 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:28.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.464 --rc genhtml_branch_coverage=1 00:37:28.464 --rc genhtml_function_coverage=1 00:37:28.464 --rc genhtml_legend=1 00:37:28.464 --rc geninfo_all_blocks=1 00:37:28.464 --rc geninfo_unexecuted_blocks=1 00:37:28.464 00:37:28.464 ' 00:37:28.464 14:24:26 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:28.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.464 --rc genhtml_branch_coverage=1 00:37:28.464 --rc genhtml_function_coverage=1 00:37:28.464 --rc genhtml_legend=1 00:37:28.464 --rc geninfo_all_blocks=1 00:37:28.464 --rc geninfo_unexecuted_blocks=1 00:37:28.464 00:37:28.464 ' 00:37:28.464 14:24:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.464 14:24:26 keyring_file -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.464 14:24:26 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.464 14:24:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.464 14:24:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.464 14:24:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:28.464 14:24:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:28.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:28.464 14:24:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:28.464 14:24:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:28.464 14:24:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:28.464 14:24:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:28.464 14:24:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:28.464 14:24:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
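prep_key, traced next, turns a raw hex key into an NVMe TLS PSK interchange string (the NVMeTLSkey-1 prefix visible in the trace), writes it to a mktemp file, and restricts that file to mode 0600, which keyring_file_add_key later insists on. A rough sketch; the actual PSK text is produced by the small Python helper the trace shows only as "python -", so a stand-in value is written here:

    # prep_key, condensed: put an interchange-format TLS PSK into a private temp file.
    # The real key text comes from format_interchange_psk in test/nvmf/common.sh;
    # the value written below is only a stand-in, not a valid key.
    key_path=$(mktemp)                 # /tmp/tmp.JsuxDZ9p0S in this run
    echo 'NVMeTLSkey-1:placeholder-not-a-real-psk:' > "$key_path"
    chmod 0600 "$key_path"             # a 0660 copy is deliberately rejected later in this test
    ls -l "$key_path"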
00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JsuxDZ9p0S 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JsuxDZ9p0S 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JsuxDZ9p0S 00:37:28.464 14:24:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.JsuxDZ9p0S 00:37:28.464 14:24:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YwHKk9wIHK 00:37:28.464 14:24:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:28.464 14:24:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:28.465 14:24:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:28.465 14:24:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YwHKk9wIHK 00:37:28.465 14:24:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YwHKk9wIHK 00:37:28.465 14:24:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.YwHKk9wIHK 00:37:28.465 14:24:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=1353485 00:37:28.465 14:24:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1353485 00:37:28.465 14:24:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:28.465 14:24:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1353485 ']' 00:37:28.465 14:24:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.465 14:24:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:28.465 14:24:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:28.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:28.465 14:24:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:28.465 14:24:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:28.465 [2024-10-30 14:24:26.734785] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:37:28.465 [2024-10-30 14:24:26.734864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353485 ] 00:37:28.726 [2024-10-30 14:24:26.826928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.726 [2024-10-30 14:24:26.879599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.297 14:24:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:29.297 14:24:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:29.297 14:24:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:29.297 14:24:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.297 14:24:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.297 [2024-10-30 14:24:27.543895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.297 null0 00:37:29.297 [2024-10-30 14:24:27.575942] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:29.297 [2024-10-30 14:24:27.576493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.559 14:24:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.559 [2024-10-30 14:24:27.608011] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:29.559 request: 00:37:29.559 { 00:37:29.559 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:29.559 "secure_channel": false, 00:37:29.559 "listen_address": { 00:37:29.559 "trtype": "tcp", 00:37:29.559 "traddr": "127.0.0.1", 00:37:29.559 "trsvcid": "4420" 00:37:29.559 }, 00:37:29.559 "method": "nvmf_subsystem_add_listener", 00:37:29.559 "req_id": 1 00:37:29.559 } 00:37:29.559 Got JSON-RPC error response 00:37:29.559 response: 00:37:29.559 { 00:37:29.559 
"code": -32602, 00:37:29.559 "message": "Invalid parameters" 00:37:29.559 } 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:29.559 14:24:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=1353540 00:37:29.559 14:24:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1353540 /var/tmp/bperf.sock 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1353540 ']' 00:37:29.559 14:24:27 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:29.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:29.559 14:24:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.559 [2024-10-30 14:24:27.670668] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:37:29.559 [2024-10-30 14:24:27.670732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353540 ] 00:37:29.559 [2024-10-30 14:24:27.760899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.559 [2024-10-30 14:24:27.813677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.504 14:24:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:30.504 14:24:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:30.504 14:24:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JsuxDZ9p0S 00:37:30.504 14:24:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JsuxDZ9p0S 00:37:30.504 14:24:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YwHKk9wIHK 00:37:30.504 14:24:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YwHKk9wIHK 00:37:30.765 14:24:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:30.765 14:24:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:30.765 14:24:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.765 14:24:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.765 14:24:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:37:30.765 14:24:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JsuxDZ9p0S == \/\t\m\p\/\t\m\p\.\J\s\u\x\D\Z\9\p\0\S ]] 00:37:30.765 14:24:29 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:30.765 14:24:29 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:30.765 14:24:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.765 14:24:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.765 14:24:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:31.026 14:24:29 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.YwHKk9wIHK == \/\t\m\p\/\t\m\p\.\Y\w\H\K\k\9\w\I\H\K ]] 00:37:31.026 14:24:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:31.026 14:24:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:31.026 14:24:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.026 14:24:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.026 14:24:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.026 14:24:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.286 14:24:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:31.286 14:24:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:31.286 14:24:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:31.286 14:24:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.286 14:24:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.286 14:24:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:31.286 14:24:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.554 14:24:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:31.554 14:24:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:31.554 14:24:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:31.554 [2024-10-30 14:24:29.792911] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:31.816 nvme0n1 00:37:31.816 14:24:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:31.816 14:24:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:31.816 14:24:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.816 14:24:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.816 14:24:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.816 14:24:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.816 14:24:30 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:31.816 14:24:30 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:31.816 14:24:30 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:37:31.816 14:24:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.816 14:24:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:31.816 14:24:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.816 14:24:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.078 14:24:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:32.078 14:24:30 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:32.078 Running I/O for 1 seconds... 00:37:33.459 19453.00 IOPS, 75.99 MiB/s 00:37:33.459 Latency(us) 00:37:33.459 [2024-10-30T13:24:31.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.459 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:33.459 nvme0n1 : 1.00 19509.42 76.21 0.00 0.00 6549.76 3659.09 18240.85 00:37:33.459 [2024-10-30T13:24:31.758Z] =================================================================================================================== 00:37:33.459 [2024-10-30T13:24:31.758Z] Total : 19509.42 76.21 0.00 0.00 6549.76 3659.09 18240.85 00:37:33.459 { 00:37:33.459 "results": [ 00:37:33.459 { 00:37:33.459 "job": "nvme0n1", 00:37:33.459 "core_mask": "0x2", 00:37:33.459 "workload": "randrw", 00:37:33.459 "percentage": 50, 00:37:33.459 "status": "finished", 00:37:33.459 "queue_depth": 128, 00:37:33.459 "io_size": 4096, 00:37:33.459 "runtime": 1.003669, 00:37:33.459 "iops": 19509.419938246574, 00:37:33.459 "mibps": 76.20867163377568, 00:37:33.459 "io_failed": 0, 00:37:33.459 "io_timeout": 0, 00:37:33.459 "avg_latency_us": 6549.757319850875, 00:37:33.459 "min_latency_us": 3659.0933333333332, 00:37:33.459 "max_latency_us": 18240.853333333333 00:37:33.459 } 00:37:33.459 ], 00:37:33.459 "core_count": 1 00:37:33.459 } 00:37:33.459 14:24:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:33.459 14:24:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:33.459 14:24:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:33.459 14:24:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.459 14:24:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.459 14:24:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.459 14:24:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.459 14:24:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:33.739 14:24:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:33.739 14:24:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:33.739 14:24:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:33.739 14:24:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.739 14:24:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.739 14:24:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:33.739 14:24:31 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.739 14:24:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:33.739 14:24:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:33.739 14:24:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:33.739 14:24:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:33.739 14:24:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:33.739 14:24:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:33.739 14:24:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:33.739 14:24:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:33.739 14:24:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:33.739 14:24:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:34.000 [2024-10-30 14:24:32.099980] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:34.000 [2024-10-30 14:24:32.100731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcda70 (107): Transport endpoint is not connected 00:37:34.000 [2024-10-30 14:24:32.101726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcda70 (9): Bad file descriptor 00:37:34.000 [2024-10-30 14:24:32.102728] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:34.000 [2024-10-30 14:24:32.102735] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:34.000 [2024-10-30 14:24:32.102741] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:34.000 [2024-10-30 14:24:32.102751] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:37:34.000 request: 00:37:34.000 { 00:37:34.000 "name": "nvme0", 00:37:34.000 "trtype": "tcp", 00:37:34.000 "traddr": "127.0.0.1", 00:37:34.000 "adrfam": "ipv4", 00:37:34.000 "trsvcid": "4420", 00:37:34.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:34.000 "prchk_reftag": false, 00:37:34.000 "prchk_guard": false, 00:37:34.000 "hdgst": false, 00:37:34.000 "ddgst": false, 00:37:34.000 "psk": "key1", 00:37:34.000 "allow_unrecognized_csi": false, 00:37:34.000 "method": "bdev_nvme_attach_controller", 00:37:34.000 "req_id": 1 00:37:34.000 } 00:37:34.000 Got JSON-RPC error response 00:37:34.000 response: 00:37:34.000 { 00:37:34.000 "code": -5, 00:37:34.000 "message": "Input/output error" 00:37:34.000 } 00:37:34.000 14:24:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:34.000 14:24:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:34.000 14:24:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:34.000 14:24:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:34.000 14:24:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.000 14:24:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:34.000 14:24:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.000 14:24:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.260 14:24:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:34.260 14:24:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:34.260 14:24:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:34.519 14:24:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:34.519 14:24:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:34.519 14:24:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:34.519 14:24:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.519 14:24:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:34.779 14:24:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:34.779 14:24:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.JsuxDZ9p0S 00:37:34.779 14:24:32 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.JsuxDZ9p0S 00:37:34.779 14:24:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:34.779 14:24:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.JsuxDZ9p0S 00:37:34.779 14:24:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:34.779 14:24:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:34.779 14:24:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:34.779 14:24:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:34.779 14:24:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JsuxDZ9p0S 00:37:34.779 14:24:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JsuxDZ9p0S 00:37:35.039 [2024-10-30 14:24:33.131394] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JsuxDZ9p0S': 0100660 00:37:35.039 [2024-10-30 14:24:33.131413] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:35.039 request: 00:37:35.039 { 00:37:35.039 "name": "key0", 00:37:35.039 "path": "/tmp/tmp.JsuxDZ9p0S", 00:37:35.039 "method": "keyring_file_add_key", 00:37:35.039 "req_id": 1 00:37:35.039 } 00:37:35.039 Got JSON-RPC error response 00:37:35.039 response: 00:37:35.039 { 00:37:35.039 "code": -1, 00:37:35.039 "message": "Operation not permitted" 00:37:35.039 } 00:37:35.039 14:24:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:35.039 14:24:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:35.039 14:24:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:35.039 14:24:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:35.039 14:24:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.JsuxDZ9p0S 00:37:35.039 14:24:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JsuxDZ9p0S 00:37:35.039 14:24:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JsuxDZ9p0S 00:37:35.039 14:24:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.JsuxDZ9p0S 00:37:35.039 14:24:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:35.039 14:24:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:35.039 14:24:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.039 14:24:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.039 14:24:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:35.039 14:24:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.301 14:24:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:35.301 14:24:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.301 14:24:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:35.301 14:24:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.301 14:24:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:35.301 14:24:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.301 14:24:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:35.301 14:24:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.301 14:24:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.301 14:24:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.562 [2024-10-30 14:24:33.656730] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.JsuxDZ9p0S': No such file or directory 00:37:35.562 [2024-10-30 14:24:33.656743] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:35.562 [2024-10-30 14:24:33.656760] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:35.562 [2024-10-30 14:24:33.656765] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:35.562 [2024-10-30 14:24:33.656775] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:35.562 [2024-10-30 14:24:33.656780] bdev_nvme.c:6576:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:35.562 request: 00:37:35.562 { 00:37:35.562 "name": "nvme0", 00:37:35.562 "trtype": "tcp", 00:37:35.562 "traddr": "127.0.0.1", 00:37:35.562 "adrfam": "ipv4", 00:37:35.562 "trsvcid": "4420", 00:37:35.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:35.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:35.562 "prchk_reftag": false, 00:37:35.562 "prchk_guard": false, 00:37:35.562 "hdgst": false, 00:37:35.562 "ddgst": false, 00:37:35.562 "psk": "key0", 00:37:35.562 "allow_unrecognized_csi": false, 00:37:35.562 "method": "bdev_nvme_attach_controller", 00:37:35.562 "req_id": 1 00:37:35.562 } 00:37:35.562 Got JSON-RPC error response 00:37:35.562 response: 00:37:35.562 { 00:37:35.562 "code": -19, 00:37:35.562 "message": "No such device" 00:37:35.562 } 00:37:35.562 14:24:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:35.562 14:24:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:35.562 14:24:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:35.562 14:24:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:35.562 14:24:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:35.562 14:24:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:35.562 14:24:33 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:35.562 14:24:33 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:35.562 14:24:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:35.562 14:24:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:35.562 14:24:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:35.562 14:24:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:35.562 14:24:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JPED2amIqF 00:37:35.562 14:24:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:35.562 14:24:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:35.562 14:24:33 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:35.562 14:24:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:35.562 14:24:33 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:35.562 14:24:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:35.562 14:24:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:35.824 14:24:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JPED2amIqF 00:37:35.824 14:24:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JPED2amIqF 00:37:35.824 14:24:33 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.JPED2amIqF 00:37:35.824 14:24:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JPED2amIqF 00:37:35.824 14:24:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JPED2amIqF 00:37:35.824 14:24:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.825 14:24:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:36.085 nvme0n1 00:37:36.085 14:24:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:36.085 14:24:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:36.085 14:24:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.085 14:24:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.085 14:24:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.085 14:24:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.345 14:24:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:36.345 14:24:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:36.345 14:24:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:36.607 14:24:34 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:36.607 14:24:34 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:36.607 14:24:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.607 14:24:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:36.607 14:24:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.607 14:24:34 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:36.607 14:24:34 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:36.607 14:24:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:36.607 14:24:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.607 14:24:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.607 14:24:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.607 14:24:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.867 14:24:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:36.867 14:24:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:36.867 14:24:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:37.127 14:24:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:37.127 14:24:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:37.127 14:24:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.127 14:24:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:37.127 14:24:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JPED2amIqF 00:37:37.127 14:24:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JPED2amIqF 00:37:37.388 14:24:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YwHKk9wIHK 00:37:37.388 14:24:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YwHKk9wIHK 00:37:37.648 14:24:35 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.648 14:24:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.909 nvme0n1 00:37:37.909 14:24:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:37.909 14:24:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:38.172 14:24:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:38.172 "subsystems": [ 00:37:38.172 { 00:37:38.172 "subsystem": "keyring", 00:37:38.172 "config": [ 00:37:38.172 { 00:37:38.172 "method": "keyring_file_add_key", 00:37:38.172 "params": { 00:37:38.172 "name": "key0", 00:37:38.172 "path": "/tmp/tmp.JPED2amIqF" 00:37:38.172 } 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "method": "keyring_file_add_key", 00:37:38.172 "params": { 00:37:38.172 "name": "key1", 00:37:38.172 "path": "/tmp/tmp.YwHKk9wIHK" 00:37:38.172 } 00:37:38.172 } 00:37:38.172 ] 
00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "subsystem": "iobuf", 00:37:38.172 "config": [ 00:37:38.172 { 00:37:38.172 "method": "iobuf_set_options", 00:37:38.172 "params": { 00:37:38.172 "small_pool_count": 8192, 00:37:38.172 "large_pool_count": 1024, 00:37:38.172 "small_bufsize": 8192, 00:37:38.172 "large_bufsize": 135168, 00:37:38.172 "enable_numa": false 00:37:38.172 } 00:37:38.172 } 00:37:38.172 ] 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "subsystem": "sock", 00:37:38.172 "config": [ 00:37:38.172 { 00:37:38.172 "method": "sock_set_default_impl", 00:37:38.172 "params": { 00:37:38.172 "impl_name": "posix" 00:37:38.172 } 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "method": "sock_impl_set_options", 00:37:38.172 "params": { 00:37:38.172 "impl_name": "ssl", 00:37:38.172 "recv_buf_size": 4096, 00:37:38.172 "send_buf_size": 4096, 00:37:38.172 "enable_recv_pipe": true, 00:37:38.172 "enable_quickack": false, 00:37:38.172 "enable_placement_id": 0, 00:37:38.172 "enable_zerocopy_send_server": true, 00:37:38.172 "enable_zerocopy_send_client": false, 00:37:38.172 "zerocopy_threshold": 0, 00:37:38.172 "tls_version": 0, 00:37:38.172 "enable_ktls": false 00:37:38.172 } 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "method": "sock_impl_set_options", 00:37:38.172 "params": { 00:37:38.172 "impl_name": "posix", 00:37:38.172 "recv_buf_size": 2097152, 00:37:38.172 "send_buf_size": 2097152, 00:37:38.172 "enable_recv_pipe": true, 00:37:38.172 "enable_quickack": false, 00:37:38.172 "enable_placement_id": 0, 00:37:38.172 "enable_zerocopy_send_server": true, 00:37:38.172 "enable_zerocopy_send_client": false, 00:37:38.172 "zerocopy_threshold": 0, 00:37:38.172 "tls_version": 0, 00:37:38.172 "enable_ktls": false 00:37:38.172 } 00:37:38.172 } 00:37:38.172 ] 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "subsystem": "vmd", 00:37:38.172 "config": [] 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "subsystem": "accel", 00:37:38.172 "config": [ 00:37:38.172 { 00:37:38.172 "method": "accel_set_options", 00:37:38.172 "params": { 00:37:38.172 "small_cache_size": 128, 00:37:38.172 "large_cache_size": 16, 00:37:38.172 "task_count": 2048, 00:37:38.172 "sequence_count": 2048, 00:37:38.172 "buf_count": 2048 00:37:38.172 } 00:37:38.172 } 00:37:38.172 ] 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "subsystem": "bdev", 00:37:38.172 "config": [ 00:37:38.172 { 00:37:38.172 "method": "bdev_set_options", 00:37:38.172 "params": { 00:37:38.172 "bdev_io_pool_size": 65535, 00:37:38.172 "bdev_io_cache_size": 256, 00:37:38.172 "bdev_auto_examine": true, 00:37:38.172 "iobuf_small_cache_size": 128, 00:37:38.172 "iobuf_large_cache_size": 16 00:37:38.172 } 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "method": "bdev_raid_set_options", 00:37:38.172 "params": { 00:37:38.172 "process_window_size_kb": 1024, 00:37:38.172 "process_max_bandwidth_mb_sec": 0 00:37:38.172 } 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "method": "bdev_iscsi_set_options", 00:37:38.172 "params": { 00:37:38.172 "timeout_sec": 30 00:37:38.172 } 00:37:38.172 }, 00:37:38.172 { 00:37:38.172 "method": "bdev_nvme_set_options", 00:37:38.172 "params": { 00:37:38.172 "action_on_timeout": "none", 00:37:38.172 "timeout_us": 0, 00:37:38.172 "timeout_admin_us": 0, 00:37:38.172 "keep_alive_timeout_ms": 10000, 00:37:38.172 "arbitration_burst": 0, 00:37:38.172 "low_priority_weight": 0, 00:37:38.172 "medium_priority_weight": 0, 00:37:38.172 "high_priority_weight": 0, 00:37:38.172 "nvme_adminq_poll_period_us": 10000, 00:37:38.172 "nvme_ioq_poll_period_us": 0, 00:37:38.172 "io_queue_requests": 512, 
00:37:38.172 "delay_cmd_submit": true, 00:37:38.172 "transport_retry_count": 4, 00:37:38.172 "bdev_retry_count": 3, 00:37:38.172 "transport_ack_timeout": 0, 00:37:38.172 "ctrlr_loss_timeout_sec": 0, 00:37:38.172 "reconnect_delay_sec": 0, 00:37:38.172 "fast_io_fail_timeout_sec": 0, 00:37:38.172 "disable_auto_failback": false, 00:37:38.172 "generate_uuids": false, 00:37:38.172 "transport_tos": 0, 00:37:38.172 "nvme_error_stat": false, 00:37:38.172 "rdma_srq_size": 0, 00:37:38.172 "io_path_stat": false, 00:37:38.172 "allow_accel_sequence": false, 00:37:38.172 "rdma_max_cq_size": 0, 00:37:38.173 "rdma_cm_event_timeout_ms": 0, 00:37:38.173 "dhchap_digests": [ 00:37:38.173 "sha256", 00:37:38.173 "sha384", 00:37:38.173 "sha512" 00:37:38.173 ], 00:37:38.173 "dhchap_dhgroups": [ 00:37:38.173 "null", 00:37:38.173 "ffdhe2048", 00:37:38.173 "ffdhe3072", 00:37:38.173 "ffdhe4096", 00:37:38.173 "ffdhe6144", 00:37:38.173 "ffdhe8192" 00:37:38.173 ] 00:37:38.173 } 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "method": "bdev_nvme_attach_controller", 00:37:38.173 "params": { 00:37:38.173 "name": "nvme0", 00:37:38.173 "trtype": "TCP", 00:37:38.173 "adrfam": "IPv4", 00:37:38.173 "traddr": "127.0.0.1", 00:37:38.173 "trsvcid": "4420", 00:37:38.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:38.173 "prchk_reftag": false, 00:37:38.173 "prchk_guard": false, 00:37:38.173 "ctrlr_loss_timeout_sec": 0, 00:37:38.173 "reconnect_delay_sec": 0, 00:37:38.173 "fast_io_fail_timeout_sec": 0, 00:37:38.173 "psk": "key0", 00:37:38.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:38.173 "hdgst": false, 00:37:38.173 "ddgst": false, 00:37:38.173 "multipath": "multipath" 00:37:38.173 } 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "method": "bdev_nvme_set_hotplug", 00:37:38.173 "params": { 00:37:38.173 "period_us": 100000, 00:37:38.173 "enable": false 00:37:38.173 } 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "method": "bdev_wait_for_examine" 00:37:38.173 } 00:37:38.173 ] 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "subsystem": "nbd", 00:37:38.173 "config": [] 00:37:38.173 } 00:37:38.173 ] 00:37:38.173 }' 00:37:38.173 14:24:36 keyring_file -- keyring/file.sh@115 -- # killprocess 1353540 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1353540 ']' 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1353540 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1353540 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1353540' 00:37:38.173 killing process with pid 1353540 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@973 -- # kill 1353540 00:37:38.173 Received shutdown signal, test time was about 1.000000 seconds 00:37:38.173 00:37:38.173 Latency(us) 00:37:38.173 [2024-10-30T13:24:36.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.173 [2024-10-30T13:24:36.472Z] =================================================================================================================== 00:37:38.173 [2024-10-30T13:24:36.472Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@978 -- # wait 1353540 00:37:38.173 14:24:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=1355352 00:37:38.173 14:24:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1355352 /var/tmp/bperf.sock 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1355352 ']' 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:38.173 14:24:36 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:38.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:38.173 14:24:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:38.173 14:24:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:38.173 "subsystems": [ 00:37:38.173 { 00:37:38.173 "subsystem": "keyring", 00:37:38.173 "config": [ 00:37:38.173 { 00:37:38.173 "method": "keyring_file_add_key", 00:37:38.173 "params": { 00:37:38.173 "name": "key0", 00:37:38.173 "path": "/tmp/tmp.JPED2amIqF" 00:37:38.173 } 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "method": "keyring_file_add_key", 00:37:38.173 "params": { 00:37:38.173 "name": "key1", 00:37:38.173 "path": "/tmp/tmp.YwHKk9wIHK" 00:37:38.173 } 00:37:38.173 } 00:37:38.173 ] 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "subsystem": "iobuf", 00:37:38.173 "config": [ 00:37:38.173 { 00:37:38.173 "method": "iobuf_set_options", 00:37:38.173 "params": { 00:37:38.173 "small_pool_count": 8192, 00:37:38.173 "large_pool_count": 1024, 00:37:38.173 "small_bufsize": 8192, 00:37:38.173 "large_bufsize": 135168, 00:37:38.173 "enable_numa": false 00:37:38.173 } 00:37:38.173 } 00:37:38.173 ] 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "subsystem": "sock", 00:37:38.173 "config": [ 00:37:38.173 { 00:37:38.173 "method": "sock_set_default_impl", 00:37:38.173 "params": { 00:37:38.173 "impl_name": "posix" 00:37:38.173 } 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "method": "sock_impl_set_options", 00:37:38.173 "params": { 00:37:38.173 "impl_name": "ssl", 00:37:38.173 "recv_buf_size": 4096, 00:37:38.173 "send_buf_size": 4096, 00:37:38.173 "enable_recv_pipe": true, 00:37:38.173 "enable_quickack": false, 00:37:38.173 "enable_placement_id": 0, 00:37:38.173 "enable_zerocopy_send_server": true, 00:37:38.173 "enable_zerocopy_send_client": false, 00:37:38.173 "zerocopy_threshold": 0, 00:37:38.173 "tls_version": 0, 00:37:38.173 "enable_ktls": false 00:37:38.173 } 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "method": "sock_impl_set_options", 00:37:38.173 "params": { 00:37:38.173 "impl_name": "posix", 00:37:38.173 "recv_buf_size": 2097152, 00:37:38.173 "send_buf_size": 2097152, 00:37:38.173 "enable_recv_pipe": true, 00:37:38.173 "enable_quickack": false, 00:37:38.173 "enable_placement_id": 0, 00:37:38.173 "enable_zerocopy_send_server": true, 00:37:38.173 "enable_zerocopy_send_client": false, 00:37:38.173 "zerocopy_threshold": 0, 00:37:38.173 "tls_version": 0, 00:37:38.173 "enable_ktls": false 00:37:38.173 } 00:37:38.173 } 00:37:38.173 ] 
00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "subsystem": "vmd", 00:37:38.173 "config": [] 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "subsystem": "accel", 00:37:38.173 "config": [ 00:37:38.173 { 00:37:38.173 "method": "accel_set_options", 00:37:38.173 "params": { 00:37:38.173 "small_cache_size": 128, 00:37:38.173 "large_cache_size": 16, 00:37:38.173 "task_count": 2048, 00:37:38.173 "sequence_count": 2048, 00:37:38.173 "buf_count": 2048 00:37:38.173 } 00:37:38.173 } 00:37:38.173 ] 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "subsystem": "bdev", 00:37:38.173 "config": [ 00:37:38.173 { 00:37:38.173 "method": "bdev_set_options", 00:37:38.173 "params": { 00:37:38.173 "bdev_io_pool_size": 65535, 00:37:38.173 "bdev_io_cache_size": 256, 00:37:38.173 "bdev_auto_examine": true, 00:37:38.173 "iobuf_small_cache_size": 128, 00:37:38.173 "iobuf_large_cache_size": 16 00:37:38.173 } 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "method": "bdev_raid_set_options", 00:37:38.173 "params": { 00:37:38.173 "process_window_size_kb": 1024, 00:37:38.173 "process_max_bandwidth_mb_sec": 0 00:37:38.173 } 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "method": "bdev_iscsi_set_options", 00:37:38.173 "params": { 00:37:38.173 "timeout_sec": 30 00:37:38.173 } 00:37:38.173 }, 00:37:38.173 { 00:37:38.173 "method": "bdev_nvme_set_options", 00:37:38.173 "params": { 00:37:38.173 "action_on_timeout": "none", 00:37:38.173 "timeout_us": 0, 00:37:38.173 "timeout_admin_us": 0, 00:37:38.173 "keep_alive_timeout_ms": 10000, 00:37:38.173 "arbitration_burst": 0, 00:37:38.173 "low_priority_weight": 0, 00:37:38.173 "medium_priority_weight": 0, 00:37:38.173 "high_priority_weight": 0, 00:37:38.173 "nvme_adminq_poll_period_us": 10000, 00:37:38.173 "nvme_ioq_poll_period_us": 0, 00:37:38.173 "io_queue_requests": 512, 00:37:38.173 "delay_cmd_submit": true, 00:37:38.173 "transport_retry_count": 4, 00:37:38.173 "bdev_retry_count": 3, 00:37:38.173 "transport_ack_timeout": 0, 00:37:38.173 "ctrlr_loss_timeout_sec": 0, 00:37:38.173 "reconnect_delay_sec": 0, 00:37:38.173 "fast_io_fail_timeout_sec": 0, 00:37:38.174 "disable_auto_failback": false, 00:37:38.174 "generate_uuids": false, 00:37:38.174 "transport_tos": 0, 00:37:38.174 "nvme_error_stat": false, 00:37:38.174 "rdma_srq_size": 0, 00:37:38.174 "io_path_stat": false, 00:37:38.174 "allow_accel_sequence": false, 00:37:38.174 "rdma_max_cq_size": 0, 00:37:38.174 "rdma_cm_event_timeout_ms": 0, 00:37:38.174 "dhchap_digests": [ 00:37:38.174 "sha256", 00:37:38.174 "sha384", 00:37:38.174 "sha512" 00:37:38.174 ], 00:37:38.174 "dhchap_dhgroups": [ 00:37:38.174 "null", 00:37:38.174 "ffdhe2048", 00:37:38.174 "ffdhe3072", 00:37:38.174 "ffdhe4096", 00:37:38.174 "ffdhe6144", 00:37:38.174 "ffdhe8192" 00:37:38.174 ] 00:37:38.174 } 00:37:38.174 }, 00:37:38.174 { 00:37:38.174 "method": "bdev_nvme_attach_controller", 00:37:38.174 "params": { 00:37:38.174 "name": "nvme0", 00:37:38.174 "trtype": "TCP", 00:37:38.174 "adrfam": "IPv4", 00:37:38.174 "traddr": "127.0.0.1", 00:37:38.174 "trsvcid": "4420", 00:37:38.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:38.174 "prchk_reftag": false, 00:37:38.174 "prchk_guard": false, 00:37:38.174 "ctrlr_loss_timeout_sec": 0, 00:37:38.174 "reconnect_delay_sec": 0, 00:37:38.174 "fast_io_fail_timeout_sec": 0, 00:37:38.174 "psk": "key0", 00:37:38.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:38.174 "hdgst": false, 00:37:38.174 "ddgst": false, 00:37:38.174 "multipath": "multipath" 00:37:38.174 } 00:37:38.174 }, 00:37:38.174 { 00:37:38.174 "method": "bdev_nvme_set_hotplug", 00:37:38.174 
"params": { 00:37:38.174 "period_us": 100000, 00:37:38.174 "enable": false 00:37:38.174 } 00:37:38.174 }, 00:37:38.174 { 00:37:38.174 "method": "bdev_wait_for_examine" 00:37:38.174 } 00:37:38.174 ] 00:37:38.174 }, 00:37:38.174 { 00:37:38.174 "subsystem": "nbd", 00:37:38.174 "config": [] 00:37:38.174 } 00:37:38.174 ] 00:37:38.174 }' 00:37:38.174 [2024-10-30 14:24:36.467806] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 00:37:38.174 [2024-10-30 14:24:36.467862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355352 ] 00:37:38.435 [2024-10-30 14:24:36.550476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.435 [2024-10-30 14:24:36.579946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.435 [2024-10-30 14:24:36.722618] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:39.007 14:24:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:39.007 14:24:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:39.007 14:24:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:39.007 14:24:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:39.007 14:24:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.268 14:24:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:39.268 14:24:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:39.268 14:24:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:39.268 14:24:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:39.268 14:24:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:39.268 14:24:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:39.268 14:24:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.529 14:24:37 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:39.529 14:24:37 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:39.529 14:24:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:39.529 14:24:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:39.529 14:24:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:39.529 14:24:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.529 14:24:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:39.529 14:24:37 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:39.529 14:24:37 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:39.529 14:24:37 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:39.529 14:24:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:39.789 14:24:37 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:39.789 14:24:37 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:39.789 14:24:37 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.JPED2amIqF /tmp/tmp.YwHKk9wIHK 00:37:39.789 14:24:37 keyring_file -- keyring/file.sh@20 -- # killprocess 1355352 00:37:39.789 14:24:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1355352 ']' 00:37:39.789 14:24:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1355352 00:37:39.789 14:24:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:39.789 14:24:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:39.789 14:24:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1355352 00:37:39.789 14:24:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:39.789 14:24:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:39.789 14:24:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1355352' 00:37:39.789 killing process with pid 1355352 00:37:39.789 14:24:38 keyring_file -- common/autotest_common.sh@973 -- # kill 1355352 00:37:39.789 Received shutdown signal, test time was about 1.000000 seconds 00:37:39.789 00:37:39.789 Latency(us) 00:37:39.789 [2024-10-30T13:24:38.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:39.789 [2024-10-30T13:24:38.088Z] =================================================================================================================== 00:37:39.789 [2024-10-30T13:24:38.088Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:39.789 14:24:38 keyring_file -- common/autotest_common.sh@978 -- # wait 1355352 00:37:40.050 14:24:38 keyring_file -- keyring/file.sh@21 -- # killprocess 1353485 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1353485 ']' 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1353485 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1353485 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1353485' 00:37:40.050 killing process with pid 1353485 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@973 -- # kill 1353485 00:37:40.050 14:24:38 keyring_file -- common/autotest_common.sh@978 -- # wait 1353485 00:37:40.312 00:37:40.312 real 0m12.073s 00:37:40.312 user 0m29.170s 00:37:40.312 sys 0m2.709s 00:37:40.312 14:24:38 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:40.312 14:24:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:40.312 ************************************ 00:37:40.312 END TEST keyring_file 00:37:40.312 ************************************ 00:37:40.312 14:24:38 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:37:40.312 14:24:38 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:40.312 14:24:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:40.312 14:24:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 
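The keyring_linux suite launched below runs through scripts/keyctl-session-wrapper, which is why its first lines report a freshly joined session keyring. The wrapper itself is not shown in this log; conceptually it boils down to something like the following sketch (command form assumed, not quoted from the wrapper):

    # Run the suite inside a throwaway session keyring; keyctl prints
    # "Joined session keyring: <serial>" and then executes the command,
    # so keys added during the test never outlive it.
    keyctl session - ./test/keyring/linux.sh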
00:37:40.312 14:24:38 -- common/autotest_common.sh@10 -- # set +x 00:37:40.312 ************************************ 00:37:40.312 START TEST keyring_linux 00:37:40.312 ************************************ 00:37:40.312 14:24:38 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:40.312 Joined session keyring: 29200919 00:37:40.312 * Looking for test storage... 00:37:40.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:40.312 14:24:38 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:40.312 14:24:38 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:37:40.312 14:24:38 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:40.574 14:24:38 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:40.574 14:24:38 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:40.574 14:24:38 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:40.574 14:24:38 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:40.574 14:24:38 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:40.574 14:24:38 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:40.574 14:24:38 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:40.575 14:24:38 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:40.575 14:24:38 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:40.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.575 --rc genhtml_branch_coverage=1 00:37:40.575 --rc genhtml_function_coverage=1 00:37:40.575 --rc genhtml_legend=1 00:37:40.575 --rc geninfo_all_blocks=1 00:37:40.575 --rc geninfo_unexecuted_blocks=1 00:37:40.575 00:37:40.575 ' 00:37:40.575 14:24:38 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:40.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.575 --rc genhtml_branch_coverage=1 00:37:40.575 --rc genhtml_function_coverage=1 00:37:40.575 --rc genhtml_legend=1 00:37:40.575 --rc geninfo_all_blocks=1 00:37:40.575 --rc geninfo_unexecuted_blocks=1 00:37:40.575 00:37:40.575 ' 00:37:40.575 14:24:38 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:40.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.575 --rc genhtml_branch_coverage=1 00:37:40.575 --rc genhtml_function_coverage=1 00:37:40.575 --rc genhtml_legend=1 00:37:40.575 --rc geninfo_all_blocks=1 00:37:40.575 --rc geninfo_unexecuted_blocks=1 00:37:40.575 00:37:40.575 ' 00:37:40.575 14:24:38 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:40.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.575 --rc genhtml_branch_coverage=1 00:37:40.575 --rc genhtml_function_coverage=1 00:37:40.575 --rc genhtml_legend=1 00:37:40.575 --rc geninfo_all_blocks=1 00:37:40.575 --rc geninfo_unexecuted_blocks=1 00:37:40.575 00:37:40.575 ' 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@547 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@555 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:40.575 14:24:38 keyring_linux -- scripts/common.sh@556 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:40.575 14:24:38 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.575 14:24:38 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.575 14:24:38 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.575 14:24:38 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:40.575 14:24:38 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:40.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:40.575 /tmp/:spdk-test:key0 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:40.575 
14:24:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:40.575 14:24:38 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:40.575 14:24:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:40.575 /tmp/:spdk-test:key1 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1355793 00:37:40.575 14:24:38 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1355793 00:37:40.576 14:24:38 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:40.576 14:24:38 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1355793 ']' 00:37:40.576 14:24:38 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.576 14:24:38 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:40.576 14:24:38 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:40.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.576 14:24:38 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:40.576 14:24:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:40.576 [2024-10-30 14:24:38.860303] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
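The prep_key/format_interchange_psk calls above turn the raw hex string into the NVMe TLS PSK interchange form, NVMeTLSkey-1:<hash id>:<base64 of key bytes plus little-endian CRC32>:, and write it to an owner-only file. A simplified sketch of that helper for the digest-0 case shown here (the real code lives in nvmf/common.sh and feeds an inline python block; the function shape and fixed path below are illustrative):

    format_interchange_psk() {
        local key=$1 digest=$2
        # NVMeTLSkey-1:<2-digit hash id>:<base64(key bytes + little-endian CRC32)>:
        python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()
    digest = int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()), end="")
    ' "$key" "$digest"
    }

    path=/tmp/:spdk-test:key0
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"   # keep the key file owner-only, as the chmod 0600 above does

Decoding the NVMeTLSkey-1:00:... string printed later in the log recovers the original 32-character hex key followed by its 4-byte CRC, which is what this formatting produces.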
00:37:40.576 [2024-10-30 14:24:38.860374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355793 ] 00:37:40.837 [2024-10-30 14:24:38.949781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.837 [2024-10-30 14:24:38.985210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:41.409 14:24:39 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:41.409 14:24:39 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:41.409 14:24:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:41.409 14:24:39 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.409 14:24:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:41.409 [2024-10-30 14:24:39.675657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:41.409 null0 00:37:41.409 [2024-10-30 14:24:39.707707] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:41.409 [2024-10-30 14:24:39.708072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:41.669 14:24:39 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.669 14:24:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:41.669 661023002 00:37:41.669 14:24:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:41.669 194884970 00:37:41.669 14:24:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1356124 00:37:41.669 14:24:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1356124 /var/tmp/bperf.sock 00:37:41.669 14:24:39 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:41.669 14:24:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1356124 ']' 00:37:41.669 14:24:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:41.669 14:24:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.669 14:24:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:41.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:41.669 14:24:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.669 14:24:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:41.669 [2024-10-30 14:24:39.795700] Starting SPDK v25.01-pre git sha1 1953a4915 / DPDK 24.03.0 initialization... 
00:37:41.669 [2024-10-30 14:24:39.795754] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356124 ] 00:37:41.669 [2024-10-30 14:24:39.879966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.669 [2024-10-30 14:24:39.909618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.610 14:24:40 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:42.610 14:24:40 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:42.610 14:24:40 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:42.610 14:24:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:42.610 14:24:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:42.611 14:24:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:42.872 14:24:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:42.872 14:24:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:42.872 [2024-10-30 14:24:41.137730] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:43.134 nvme0n1 00:37:43.134 14:24:41 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:43.134 14:24:41 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:43.134 14:24:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:43.134 14:24:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:43.134 14:24:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:43.134 14:24:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.134 14:24:41 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:43.134 14:24:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:43.134 14:24:41 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:43.134 14:24:41 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:43.134 14:24:41 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.134 14:24:41 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:43.134 14:24:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.396 14:24:41 keyring_linux -- keyring/linux.sh@25 -- # sn=661023002 00:37:43.396 14:24:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:43.396 14:24:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:43.396 14:24:41 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 661023002 == \6\6\1\0\2\3\0\0\2 ]] 00:37:43.396 14:24:41 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 661023002 00:37:43.396 14:24:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:43.396 14:24:41 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:43.396 Running I/O for 1 seconds... 00:37:44.781 24419.00 IOPS, 95.39 MiB/s 00:37:44.781 Latency(us) 00:37:44.781 [2024-10-30T13:24:43.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.781 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:44.781 nvme0n1 : 1.01 24419.04 95.39 0.00 0.00 5226.97 3631.79 7973.55 00:37:44.781 [2024-10-30T13:24:43.080Z] =================================================================================================================== 00:37:44.781 [2024-10-30T13:24:43.080Z] Total : 24419.04 95.39 0.00 0.00 5226.97 3631.79 7973.55 00:37:44.781 { 00:37:44.781 "results": [ 00:37:44.781 { 00:37:44.781 "job": "nvme0n1", 00:37:44.781 "core_mask": "0x2", 00:37:44.781 "workload": "randread", 00:37:44.781 "status": "finished", 00:37:44.781 "queue_depth": 128, 00:37:44.781 "io_size": 4096, 00:37:44.781 "runtime": 1.00524, 00:37:44.781 "iops": 24419.044208348256, 00:37:44.781 "mibps": 95.38689143886037, 00:37:44.781 "io_failed": 0, 00:37:44.781 "io_timeout": 0, 00:37:44.781 "avg_latency_us": 5226.969002593663, 00:37:44.781 "min_latency_us": 3631.786666666667, 00:37:44.781 "max_latency_us": 7973.546666666667 00:37:44.781 } 00:37:44.781 ], 00:37:44.781 "core_count": 1 00:37:44.781 } 00:37:44.781 14:24:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:44.781 14:24:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:44.781 14:24:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:44.781 14:24:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:44.781 14:24:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:44.781 14:24:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:44.781 14:24:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:44.781 14:24:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.781 14:24:43 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:44.781 14:24:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:44.781 14:24:43 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:44.781 14:24:43 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:44.781 14:24:43 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:37:44.781 14:24:43 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:37:44.781 14:24:43 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:44.781 14:24:43 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:44.781 14:24:43 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:44.781 14:24:43 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:44.781 14:24:43 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:44.781 14:24:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:45.057 [2024-10-30 14:24:43.206896] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:45.057 [2024-10-30 14:24:43.207515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805bb0 (107): Transport endpoint is not connected 00:37:45.057 [2024-10-30 14:24:43.208511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805bb0 (9): Bad file descriptor 00:37:45.057 [2024-10-30 14:24:43.209514] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:45.057 [2024-10-30 14:24:43.209528] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:45.057 [2024-10-30 14:24:43.209533] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:45.057 [2024-10-30 14:24:43.209540] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
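The errors above are the expected outcome of keyring/linux.sh@84: the controller is attached with the second key on purpose, and the NOT helper passes only when the RPC exits non-zero. Stripped of the autotest helpers, the check amounts to something like this sketch (rpc.py path shortened):

    # Negative-path check: attaching with the mismatched PSK is supposed to
    # fail, so a zero exit status from rpc.py would be the bug.
    if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
            -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
            --psk :spdk-test:key1; then
        echo "attach with the mismatched PSK unexpectedly succeeded" >&2
        exit 1
    fi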
00:37:45.057 request:
00:37:45.057 {
00:37:45.057   "name": "nvme0",
00:37:45.057   "trtype": "tcp",
00:37:45.057   "traddr": "127.0.0.1",
00:37:45.057   "adrfam": "ipv4",
00:37:45.057   "trsvcid": "4420",
00:37:45.057   "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:45.057   "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:45.057   "prchk_reftag": false,
00:37:45.057   "prchk_guard": false,
00:37:45.057   "hdgst": false,
00:37:45.057   "ddgst": false,
00:37:45.057   "psk": ":spdk-test:key1",
00:37:45.057   "allow_unrecognized_csi": false,
00:37:45.057   "method": "bdev_nvme_attach_controller",
00:37:45.057   "req_id": 1
00:37:45.057 }
00:37:45.057 Got JSON-RPC error response
00:37:45.057 response:
00:37:45.057 {
00:37:45.057   "code": -5,
00:37:45.057   "message": "Input/output error"
00:37:45.057 }
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@33 -- # sn=661023002
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 661023002
00:37:45.057 1 links removed
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@33 -- # sn=194884970
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 194884970
00:37:45.057 1 links removed
00:37:45.057 14:24:43 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1356124
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1356124 ']'
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1356124
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1356124
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1356124'
00:37:45.057 killing process with pid 1356124
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@973 -- # kill 1356124
00:37:45.057 Received shutdown signal, test time was about 1.000000 seconds
00:37:45.057 
00:37:45.057 Latency(us)
00:37:45.057 [2024-10-30T13:24:43.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:45.057 [2024-10-30T13:24:43.356Z] ===================================================================================================================
00:37:45.057 [2024-10-30T13:24:43.356Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:45.057 14:24:43 keyring_linux -- common/autotest_common.sh@978 -- # wait 1356124
00:37:45.317 14:24:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1355793
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1355793 ']'
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1355793
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1355793
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1355793'
00:37:45.317 killing process with pid 1355793
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@973 -- # kill 1355793
00:37:45.317 14:24:43 keyring_linux -- common/autotest_common.sh@978 -- # wait 1355793
00:37:45.578 
00:37:45.578 real 0m5.188s
00:37:45.578 user 0m9.624s
00:37:45.578 sys 0m1.450s
00:37:45.578 14:24:43 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:45.578 14:24:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:37:45.578 ************************************
00:37:45.578 END TEST keyring_linux
00:37:45.578 ************************************
00:37:45.578 14:24:43 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:37:45.578 14:24:43 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:37:45.578 14:24:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:37:45.578 14:24:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:37:45.578 14:24:43 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:37:45.578 14:24:43 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:37:45.578 14:24:43 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:37:45.578 14:24:43 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:45.578 14:24:43 -- common/autotest_common.sh@10 -- # set +x
00:37:45.578 14:24:43 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:37:45.578 14:24:43 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:37:45.578 14:24:43 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:37:45.578 14:24:43 -- common/autotest_common.sh@10 -- # set +x
00:37:53.721 INFO: APP EXITING
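The keyring_linux cleanup above (keyring/linux.sh@16 through @34) resolves each ":spdk-test:keyN" description to a kernel key serial number with keyctl search and then unlinks it. A minimal sketch of the same round trip, for orientation only: the keyctl add step and its payload are assumptions added here for illustration, while the search, unlink and rpc.py invocations mirror the ones recorded in the log (in the logged run the attach itself fails with JSON-RPC error -5).

# Sketch only: place a PSK named ":spdk-test:key1" in the session keyring, reference it
# from the attach RPC, then remove it the way unlink_key does above.
keyctl add user :spdk-test:key1 '<psk-interchange-blob>' @s          # assumed setup; payload is a placeholder
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
sn=$(keyctl search @s user :spdk-test:key1)                          # get_keysn, as logged
keyctl unlink "$sn"                                                  # unlink_key, as logged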
00:37:53.721 INFO: killing all VMs 00:37:53.721 INFO: killing vhost app 00:37:53.721 INFO: EXIT DONE 00:37:57.025 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:57.025 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:57.025 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:01.234 Cleaning 00:38:01.234 Removing: /var/run/dpdk/spdk0/config 00:38:01.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:01.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:01.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:01.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:01.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:01.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:01.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:01.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:01.234 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:01.234 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:01.234 Removing: /var/run/dpdk/spdk1/config 00:38:01.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:01.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:01.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:01.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:01.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:01.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:01.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:01.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:01.234 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:01.234 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:01.234 Removing: /var/run/dpdk/spdk2/config 00:38:01.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:01.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:01.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:01.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:01.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:01.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:01.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:01.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:01.234 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:01.234 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:01.234 Removing: /var/run/dpdk/spdk3/config 00:38:01.234 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:01.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:01.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:01.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:01.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:01.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:01.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:01.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:01.234 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:01.234 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:01.234 Removing: /var/run/dpdk/spdk4/config 00:38:01.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:01.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:01.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:01.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:01.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:01.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:01.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:01.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:01.234 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:01.234 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:01.234 Removing: /dev/shm/bdev_svc_trace.1 00:38:01.234 Removing: /dev/shm/nvmf_trace.0 00:38:01.234 Removing: /dev/shm/spdk_tgt_trace.pid782728 00:38:01.234 Removing: /var/run/dpdk/spdk0 00:38:01.234 Removing: /var/run/dpdk/spdk1 00:38:01.234 Removing: /var/run/dpdk/spdk2 00:38:01.234 Removing: /var/run/dpdk/spdk3 00:38:01.234 Removing: /var/run/dpdk/spdk4 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1031373 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1036779 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1038882 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1041684 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1041879 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1042067 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1042397 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1043111 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1045463 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1046560 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1047188 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1049719 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1050550 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1051391 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1056366 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1062996 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1062998 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1062999 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1067686 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1077953 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1082741 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1090692 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1092193 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1093871 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1095555 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1101317 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1106353 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1115456 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1115463 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1120524 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1120846 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1121178 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1121528 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1121608 00:38:01.234 Removing: 
/var/run/dpdk/spdk_pid1127230 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1127848 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1133238 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1136537 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1143099 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1150106 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1160347 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1168838 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1168882 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1191839 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1192522 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1193237 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1194064 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1195185 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1196340 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1197102 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1197889 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1202945 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1203283 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1210425 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1210698 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1217161 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1222241 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1233811 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1234485 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1239536 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1239943 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1245047 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1252406 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1255308 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1267465 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1278131 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1280141 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1281149 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1301325 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1306037 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1309235 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1317004 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1317009 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1322891 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1325115 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1327603 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1328807 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1331306 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1332745 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1342776 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1343376 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1343908 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1346847 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1347512 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1348459 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1353485 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1353540 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1355352 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1355793 00:38:01.234 Removing: /var/run/dpdk/spdk_pid1356124 00:38:01.234 Removing: /var/run/dpdk/spdk_pid781242 00:38:01.234 Removing: /var/run/dpdk/spdk_pid782728 00:38:01.234 Removing: /var/run/dpdk/spdk_pid783572 00:38:01.234 Removing: /var/run/dpdk/spdk_pid784617 00:38:01.234 Removing: /var/run/dpdk/spdk_pid784958 00:38:01.234 Removing: /var/run/dpdk/spdk_pid786018 00:38:01.234 Removing: /var/run/dpdk/spdk_pid786191 00:38:01.234 Removing: /var/run/dpdk/spdk_pid786496 00:38:01.234 Removing: /var/run/dpdk/spdk_pid787633 00:38:01.234 Removing: /var/run/dpdk/spdk_pid788365 00:38:01.234 Removing: /var/run/dpdk/spdk_pid788741 00:38:01.234 Removing: 
/var/run/dpdk/spdk_pid789071 00:38:01.234 Removing: /var/run/dpdk/spdk_pid789481 00:38:01.234 Removing: /var/run/dpdk/spdk_pid789850 00:38:01.234 Removing: /var/run/dpdk/spdk_pid790171 00:38:01.234 Removing: /var/run/dpdk/spdk_pid790521 00:38:01.234 Removing: /var/run/dpdk/spdk_pid790907 00:38:01.497 Removing: /var/run/dpdk/spdk_pid792439 00:38:01.497 Removing: /var/run/dpdk/spdk_pid796046 00:38:01.497 Removing: /var/run/dpdk/spdk_pid796409 00:38:01.497 Removing: /var/run/dpdk/spdk_pid796790 00:38:01.497 Removing: /var/run/dpdk/spdk_pid796831 00:38:01.497 Removing: /var/run/dpdk/spdk_pid797412 00:38:01.497 Removing: /var/run/dpdk/spdk_pid797514 00:38:01.497 Removing: /var/run/dpdk/spdk_pid797894 00:38:01.497 Removing: /var/run/dpdk/spdk_pid798034 00:38:01.497 Removing: /var/run/dpdk/spdk_pid798275 00:38:01.497 Removing: /var/run/dpdk/spdk_pid798590 00:38:01.497 Removing: /var/run/dpdk/spdk_pid798708 00:38:01.497 Removing: /var/run/dpdk/spdk_pid798971 00:38:01.497 Removing: /var/run/dpdk/spdk_pid799416 00:38:01.497 Removing: /var/run/dpdk/spdk_pid799764 00:38:01.497 Removing: /var/run/dpdk/spdk_pid800174 00:38:01.497 Removing: /var/run/dpdk/spdk_pid804696 00:38:01.497 Removing: /var/run/dpdk/spdk_pid810082 00:38:01.497 Removing: /var/run/dpdk/spdk_pid822127 00:38:01.497 Removing: /var/run/dpdk/spdk_pid822837 00:38:01.497 Removing: /var/run/dpdk/spdk_pid828207 00:38:01.497 Removing: /var/run/dpdk/spdk_pid828566 00:38:01.497 Removing: /var/run/dpdk/spdk_pid833763 00:38:01.497 Removing: /var/run/dpdk/spdk_pid840889 00:38:01.497 Removing: /var/run/dpdk/spdk_pid844387 00:38:01.497 Removing: /var/run/dpdk/spdk_pid857197 00:38:01.497 Removing: /var/run/dpdk/spdk_pid867996 00:38:01.497 Removing: /var/run/dpdk/spdk_pid870239 00:38:01.497 Removing: /var/run/dpdk/spdk_pid871317 00:38:01.497 Removing: /var/run/dpdk/spdk_pid892314 00:38:01.497 Removing: /var/run/dpdk/spdk_pid897182 00:38:01.497 Removing: /var/run/dpdk/spdk_pid954813 00:38:01.497 Removing: /var/run/dpdk/spdk_pid961209 00:38:01.497 Removing: /var/run/dpdk/spdk_pid968401 00:38:01.497 Removing: /var/run/dpdk/spdk_pid976182 00:38:01.497 Removing: /var/run/dpdk/spdk_pid976257 00:38:01.497 Removing: /var/run/dpdk/spdk_pid977287 00:38:01.497 Removing: /var/run/dpdk/spdk_pid978320 00:38:01.497 Removing: /var/run/dpdk/spdk_pid979331 00:38:01.497 Removing: /var/run/dpdk/spdk_pid979977 00:38:01.497 Removing: /var/run/dpdk/spdk_pid980011 00:38:01.497 Removing: /var/run/dpdk/spdk_pid980319 00:38:01.497 Removing: /var/run/dpdk/spdk_pid980356 00:38:01.497 Removing: /var/run/dpdk/spdk_pid980360 00:38:01.497 Removing: /var/run/dpdk/spdk_pid981364 00:38:01.497 Removing: /var/run/dpdk/spdk_pid982369 00:38:01.497 Removing: /var/run/dpdk/spdk_pid983404 00:38:01.497 Removing: /var/run/dpdk/spdk_pid984058 00:38:01.497 Removing: /var/run/dpdk/spdk_pid984107 00:38:01.497 Removing: /var/run/dpdk/spdk_pid984393 00:38:01.497 Removing: /var/run/dpdk/spdk_pid985822 00:38:01.497 Removing: /var/run/dpdk/spdk_pid986999 00:38:01.497 Removing: /var/run/dpdk/spdk_pid997000 00:38:01.497 Clean 00:38:01.759 14:24:59 -- common/autotest_common.sh@1453 -- # return 0 00:38:01.759 14:24:59 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:01.759 14:24:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:01.759 14:24:59 -- common/autotest_common.sh@10 -- # set +x 00:38:01.759 14:24:59 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:01.759 14:24:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:01.759 14:24:59 -- common/autotest_common.sh@10 -- 
# set +x 00:38:01.759 14:24:59 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:01.759 14:24:59 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:01.759 14:24:59 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:01.759 14:24:59 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:01.759 14:24:59 -- spdk/autotest.sh@394 -- # hostname 00:38:01.759 14:24:59 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:02.020 geninfo: WARNING: invalid characters removed from testname! 00:38:28.606 14:25:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:30.517 14:25:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:31.899 14:25:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:34.447 14:25:32 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:35.834 14:25:34 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:37.748 14:25:35 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:39.166 14:25:37 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:38:39.166 14:25:37 -- spdk/autorun.sh@1 -- $ timing_finish
00:38:39.166 14:25:37 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:38:39.166 14:25:37 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:39.166 14:25:37 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:39.166 14:25:37 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:39.166 + [[ -n 695662 ]]
00:38:39.166 + sudo kill 695662
00:38:39.177 [Pipeline] }
00:38:39.194 [Pipeline] // stage
00:38:39.200 [Pipeline] }
00:38:39.215 [Pipeline] // timeout
00:38:39.220 [Pipeline] }
00:38:39.234 [Pipeline] // catchError
00:38:39.240 [Pipeline] }
00:38:39.256 [Pipeline] // wrap
00:38:39.262 [Pipeline] }
00:38:39.277 [Pipeline] // catchError
00:38:39.288 [Pipeline] stage
00:38:39.291 [Pipeline] { (Epilogue)
00:38:39.305 [Pipeline] catchError
00:38:39.307 [Pipeline] {
00:38:39.321 [Pipeline] echo
00:38:39.323 Cleanup processes
00:38:39.329 [Pipeline] sh
00:38:39.746 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:39.746 1369100 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:39.784 [Pipeline] sh
00:38:40.076 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:40.076 ++ grep -v 'sudo pgrep'
00:38:40.076 ++ awk '{print $1}'
00:38:40.076 + sudo kill -9
00:38:40.076 + true
00:38:40.089 [Pipeline] sh
00:38:40.381 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:52.628 [Pipeline] sh
00:38:52.920 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:52.920 Artifacts sizes are good
00:38:52.936 [Pipeline] archiveArtifacts
00:38:52.943 Archiving artifacts
00:38:53.079 [Pipeline] sh
00:38:53.381 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:53.397 [Pipeline] cleanWs
00:38:53.407 [WS-CLEANUP] Deleting project workspace...
00:38:53.407 [WS-CLEANUP] Deferred wipeout is used...
00:38:53.415 [WS-CLEANUP] done
00:38:53.417 [Pipeline] }
00:38:53.433 [Pipeline] // catchError
00:38:53.444 [Pipeline] sh
00:38:53.735 + logger -p user.info -t JENKINS-CI
00:38:53.747 [Pipeline] }
00:38:53.759 [Pipeline] // stage
00:38:53.765 [Pipeline] }
00:38:53.779 [Pipeline] // node
00:38:53.784 [Pipeline] End of Pipeline
00:38:53.811 Finished: SUCCESS
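For reference, the coverage post-processing recorded just before the epilogue reduces to the sequence below. This is a condensed sketch, not the exact invocations: SPDK and OUT are shorthand introduced here for the workspace paths in the log, and the --rc lcov/genhtml options have been trimmed to the flags the log actually shows.

# Condensed sketch of the lcov steps logged above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK/../output
lcov -q -c --no-external -d $SPDK -t spdk-cyp-09 -o $OUT/cov_test.info        # capture test-run coverage
lcov -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info    # merge with the baseline capture
lcov -q -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info              # strip DPDK sources
lcov -q -r $OUT/cov_total.info '/usr/*' --ignore-errors unused,unused -o $OUT/cov_total.info   # strip system headers
lcov -q -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
lcov -q -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
lcov -q -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR                       # final cleanup, as in autotest.sh@404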